[SOURCE: https://en.wikipedia.org/wiki/Deadpan] | [TOKENS: 1117] |
Deadpan Deadpan, dry humour, or dry-wit humour is the deliberate display of emotional neutrality or no emotion, commonly as a form of comedic delivery to contrast with the ridiculousness or absurdity of the subject matter. The delivery is meant to be blunt, ironic, laconic, or apparently unintentional. Etymology The term deadpan first emerged early in the 20th century, most likely during the 1920s, as a compound word (sometimes spelled as two words) combining "dead" and "pan" (a slang term for the face). It appeared in print as early as 1915, in an article about a former baseball player named Gene Woodburn written by his former manager Roger Bresnahan. Bresnahan described how Woodburn used his skill as a ventriloquist to make his manager and others think they were being heckled from the stands. Woodburn, wrote Bresnahan, "had a trick of what the actors call 'the dead pan.' He never cracked a smile and would be the last man you would suspect was working a trick." George M. Cohan, in a 1908 interview, had alluded to dead pans without using the actual term "deadpan". Cohan, after returning from a trip to London, told an interviewer: "The time is ripe for a manager to take over about a dozen American chorus girls and wake up the musical comedy game. The English chorus girls are dead–their pans are cold." The Oxford English Dictionary cites a 1928 New York Times article as having the first appearance of the term in print. That article, a collection of film slang compiled by writer and theatrical agent Frank J. Wilstach, defines "dead pan" as "playing a role with expressionless face, as, for instance, the work of Buster Keaton." Several other uses of the term, both in theater and in sports, have been identified between the 1915 Bresnahan article and the 1928 article in the Times. The usage of deadpan as a verb ("to speak, act, or utter in a deadpan manner; to maintain a dead pan") is recorded at least as far back as 1942. Examples The English music hall comedian T. W. Barrett, working in the 1880s and 1890s, is credited with being the first to perform in a deadpan manner, standing completely still and without a smile. Early in his vaudeville days, Buster Keaton developed his deadpan expression. Keaton realized that audiences responded better to his stony expression than when he smiled, and he carried this style into his silent film career. The 1928 Vitaphone short film The Beau Brummels, with vaudeville comics Al Shaw and Sam Lee, was performed entirely in deadpan. The 1980 film Airplane! was performed almost entirely in deadpan; it helped relaunch the career of one of its supporting actors, Leslie Nielsen, who transformed into a prolific deadpan comic after the film. Actor and comedian Bill Murray is known for his deadpan delivery. Several American television comedies have emphasized deadpan deliveries of dry humour, including Bob Newhart in The Bob Newhart Show, and much of the casts of Curb Your Enthusiasm, Arrested Development, and My Name Is Earl. More recent examples include Andre Braugher as Captain Raymond Holt in the TV show Brooklyn Nine-Nine, Matthew Perry as Chandler Bing in Friends, Nick Offerman as Ron Swanson and Aubrey Plaza as April Ludgate in Parks and Recreation, Jennette McCurdy as Sam Puckett in iCarly, and Louis C.K. in Louie. Another example is the comedy of Steven Wright. Deadpan delivery runs throughout British humour. 
In television sitcoms, John Cleese as Basil Fawlty in Fawlty Towers and Rowan Atkinson as Edmund Blackadder in Blackadder are both frustrated figures who display little facial expression in their put-downs. Atkinson also plays authority figures (especially priests) while speaking absurd lines with a deadpan delivery. Monty Python include it in their work, such as "The Ministry of Silly Walks" sketch. For his deadpan delivery Peter Sellers received a BAFTA for Best Actor for I'm All Right Jack (1959). A leading figure of the British satire boom of the 1960s, Peter Cook delivered deadpan monologues in his double act with Dudley Moore. In his various roles Ricky Gervais often draws humour from an exasperated sigh. In his various guises, such as Ali G and Borat, the comedian Sacha Baron Cohen interacts with unsuspecting subjects who do not realise they have been set up for self-revealing ridicule; of this, The Observer states, "his career has been built on winding people up, while keeping a deadpan face." Deadpan delivery is a particular staple of New Zealand comedy, with Flight of the Conchords being the best-known example internationally. Dry humour is often confused with highbrow or egghead humour, because the humour in dry humour does not exist in the words or delivery. Instead, the listener must look for humour in the contradiction between words, delivery and context. Failure to include the context or to identify the contradiction results in the listener finding the dry humour unfunny. However, the term "deadpan" itself actually refers only to the method of delivery. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pelagophyte] | [TOKENS: 148] |
Pelagophyceae Pelagophyceae is a class of heterokont algae. It is the sister group of the Dictyochophyceae. All known species are marine. They can be single-celled (coccoid or flagellate), palmelloid or filamentous. Some members (Pelagomonas) belong to the picoplankton, and others (Sarcinochrysis) are macroscopic attached organisms. As of 2017, the class contained 13 genera, three families and two orders; it is expected that molecular studies will add more species to this list. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/24-Isopropylcholestane] | [TOKENS: 1322] |
24-Isopropylcholestane 24-isopropyl cholestane is an organic molecule produced by specific sponges, protists and marine algae. The identification of this molecule at high abundances in Neoproterozoic rocks has been interpreted to reflect the presence of multicellular life prior to the rapid diversification and radiation of life during the Cambrian explosion. In this transitional period at the start of the Phanerozoic, single-celled organisms evolved to produce many of the evolutionary lineages present on Earth today. Interpreting 24-isopropyl cholestane in ancient rocks as indicating the presence of sponges before this rapid diversification event alters the traditional understanding of the evolution of multicellular life and the coupling of biology to changes in end-Neoproterozoic climate. However, there are several arguments against causally linking 24-isopropyl cholestane to sponges, based on considerations of marine algae and the potential alteration of organic molecules over geologic time. In particular, the discovery of 24-isopropyl cholestane in rhizarian protists implies that this biomarker cannot be used on its own to trace sponges. Interpreting the presence of 24-isopropyl cholestane in the context of changing global biogeochemical cycles at the Proterozoic-Phanerozoic transition remains an area of active research. 24-isopropyl cholestane 24-isopropyl cholestane (figure 1, left) is a C30 sterane with chemical formula C30H54 and molecular mass 414.76 g/mol. The molecule has a cholestane skeleton with an isopropyl moiety at C24 and is the geologically stable form of 24-isopropyl cholesterol. A related and important molecule is 24-n-propyl cholestane (figure 1, right), also with the cholestane skeleton, but with an n-propyl moiety at C24. 24-isopropyl cholestane is produced copiously by a particular group of sponges in the class Demospongiae within the phylum Porifera. Like other molecular fossils, the presence of 24-isopropyl cholestane in rocks may indicate whether demosponges were living in or near the rock's depositional environment. High abundances of 24-isopropyl cholestane have been identified in Precambrian rocks from the Huqf Supergroup in Oman, suggesting the presence of sponges prior to the Cambrian explosion. However, sponges are not the only organisms that produce 24-isopropyl cholestane, so the identification of this biomarker is not uniquely linked to the presence of demosponges. While marine pelagophyte algae predominantly produce 24-n-propylcholestane, they also produce 24-isopropyl cholestane. The two possible sources of 24-isopropyl cholestane to rocks, the demosponges and the algae, can be decoupled by considering the ratio of 24-isopropyl cholestane to 24-n-propyl cholestane. In many rocks, this ratio is 0.2-0.3. However, in rocks from Oman, the ratio of steranes is 0.52-16.1, with an average value of 1.51, which strongly suggests input of sponge organic matter. Notably, these elevated values disappear during the Cambrian, and the ratio of 24-isopropyl cholestane to 24-n-propyl cholestane is used as an age-specific proxy for the Proterozoic-Phanerozoic transition. Recent research in molecular clocks has argued that the ability to produce 24-isopropyl cholesterol evolved independently in both the demosponges and the algae. However, it appears that the biosynthesis evolved earlier in the sponges, during the Neoproterozoic, and that the ability to perform the biosynthesis was not present in algae until the Phanerozoic. 
If correct, these results would give scientists much more confidence in interpreting elevated levels of 24-isopropyl cholestane in ancient rocks as reflecting the presence of sponges. Additional evidence for sponge evolution before the Cambrian explosion is found in bioclastic packstones from South Australia. Through repeated grinding and photography, researchers constructed 3D models of asymmetric structures with ~1 mm-diameter interconnected channels contained within this rock. The complex network of tunnels appears inconsistent with fungi or algae, and the researchers tentatively suggested that they are primitive sponges. This interpretation is controversial because the structures pre-date the first appearance of other sponge fossils and the structures are only known to occur within a single sedimentary sequence. Implications While Love et al. (2009) argues for the presence of sponges in rocks below the Marinoan cap carbonate at ~635 Ma (millions of years ago), Antcliffe (2013) estimates the age of the biomarker-bearing rock to be between 645 Ma and ~580 Ma. Most recently, Gold et al. (2016) write that rocks containing 24-isopropylcholestane have an age between ~650 Ma and 540 Ma. In all cases, the estimates agree that the rocks containing 24-isopropylcholestane pre-date the Cambrian explosion at ~541 Ma. The presence of sponges before ~540 Ma has profound implications for the evolution of multicellular life and the coupling of the biosphere to Neoproterozoic climate. Climate change before the Cambrian explosion and the subsequent diversification of life are intricately intertwined with understanding the causes of Snowball Earth episodes, the deposition of Banded Iron Formations, and the second step in the rise of atmospheric oxygen. In particular, the presence of sponges raises questions of the minimum dissolved O2 content of the oceans in the late Neoproterozoic and the transition from a euxinic Canfield ocean to the modern oxygenated deep-ocean. However, sponges appear to require very little O2 to survive, so their presence in the Precambrian may not provide strong constraints on Proterozoic O2 levels. Caveats There are several lines of reasoning against interpreting 24-isopropyl cholestane as a biomarker for demosponges, including the production of the same molecule by marine pelagophyte algae and rhizarian protists, and the potential alteration of sedimentary organic molecules over geologic time. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OptimJ] | [TOKENS: 1620] |
OptimJ OptimJ is an extension for Java with language support for writing optimization models and abstractions for bulk data processing. The extensions and the proprietary product implementing the extensions were developed by Ateji, which went out of business in September 2011. OptimJ aims at providing a clear and concise algebraic notation for optimization modeling, removing compatibility barriers between optimization modeling and application programming tools, and bringing software engineering techniques such as object-orientation and modern IDE support to optimization experts. OptimJ models are directly compatible with Java source code and existing Java libraries such as database access, Excel connection or graphical interfaces. OptimJ is compatible with development tools such as Eclipse, CVS, JUnit or JavaDoc. OptimJ is available free with the following solvers: lp_solve, glpk, LP or MPS file formats, and also supports the following commercial solvers: MOSEK, IBM ILOG CPLEX Optimization Studio. Language concepts OptimJ combines concepts from object-oriented imperative languages with concepts from algebraic modeling languages for optimization problems. Here we will review the optimization concepts added to Java, starting with a concrete example. The example of map coloring The goal of a map coloring problem is to color a map so that regions sharing a common border have different colors. It can be expressed in OptimJ as in the sketch below. Readers familiar with Java will notice a strong similarity to that language. Indeed, OptimJ is a conservative extension of Java: every valid Java program is also a valid OptimJ program and has the same behavior. This map coloring example also shows features specific to optimization that have no direct equivalent in Java, introduced by the keywords model, var, constraints. OR-specific concepts A model is an extension of a Java class that can contain not only fields and methods but also constraints and an objective function. It is introduced by the model keyword and follows the same rules as class declarations. A non-abstract model must be linked to a solver, introduced by the keyword solver. The capabilities of the solver will determine what kind of constraints can be expressed in the model; for instance, a linear solver such as lp_solve will only allow linear constraints. Imperative languages such as Java provide a notion of imperative variables, which basically represent memory locations that can be written to and read from. OptimJ also introduces the notion of a decision variable, which basically represents an unknown quantity whose value one is searching for. A solution to an optimization problem is a set of values for all its decision variables that respects the constraints of the problem—without decision variables, it would not be possible to express optimization problems. The term "decision variable" comes from the optimization community, but decision variables in OptimJ are the same concept as logical variables in logical languages such as Prolog. Decision variables have special types introduced by the keyword var. There is a var type for each possible Java type. In the map coloring example, decision variables were introduced together with the range of values they may take. This is just a shorthand equivalent to putting a constraint on the variable. Constraints express conditions that must be true in any solution of the problem. A constraint can be any Java Boolean expression involving decision variables. 
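A hedged sketch of the map-coloring model just described. The model, solver, var and constraints keywords follow the article's description; the range notation and the extract/solve/value calls in main are assumptions about the OptimJ API and may differ from the real language, and the country names are purely illustrative.

```
// Sketch of an OptimJ map-coloring model (OptimJ is a conservative extension of Java).
public model SimpleColoring solver lpsolve
{
    // number of available colors
    int nbColors = 4;

    // decision variables: the unknown color of each country,
    // declared together with the range of values they may take
    var int belgium in 1 .. nbColors;
    var int denmark in 1 .. nbColors;
    var int germany in 1 .. nbColors;

    // neighboring countries must receive different colors
    constraints {
        belgium != germany;
        germany != denmark;
    }

    // plain Java entry point: build the model, solve it, print a solution
    // (extract/solve/value are assumed names for the solver-facing API)
    public static void main(String[] args) {
        SimpleColoring m = new SimpleColoring();
        m.extract();
        if (m.solve()) {
            System.out.println("Belgium: " + m.value(m.belgium));
            System.out.println("Denmark: " + m.value(m.denmark));
            System.out.println("Germany: " + m.value(m.germany));
        }
    }
}
```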
In the map coloring example, this set of constraints states that in any solution to the map coloring problem, the color of Belgium must be different from the color of Germany, and the color of Germany must be different from the color of Denmark. The operator != is the standard Java not-equal operator. Constraints typically come in batches and can be quantified with the forall operator. For instance, instead of listing all countries and their neighbors explicitly in the source code, one may have an array of countries, an array of decision variables representing the color of each country, and an array boolean[][] neighboring or a predicate (a Boolean function) boolean isNeighbor(), as illustrated in the sketch below. Country c1 : countries is a generator: it iterates c1 over all the values in the collection countries. :isNeighbor(c1,c2) is a filter: it keeps only the generated values for which the predicate is true (the symbol : may be read as "if"). Assuming that the array countries contains belgium, germany and denmark, and that the predicate isNeighbor returns true for the couples (Belgium, Germany) and (Germany, Denmark), then this code is equivalent to the constraints block of the original map coloring example. Optionally, when a model describes an optimization problem, an objective function to be minimized or maximized can be stated in the model. Generalist concepts Generalist concepts are programming concepts that are not specific to OR problems and would make sense for any kind of application development. The generalist concepts added to Java by OptimJ make the expression of OR models easier or more concise. They are often present in older modeling languages and thus provide OR experts with a familiar way of expressing their models. While Java arrays can only be indexed by 0-based integers, OptimJ arrays can be indexed by values of any type. Such arrays are typically called associative arrays or maps. For example, an array age of type int[String], denoting an array of int indexed by String, can hold the age of persons identified by their name. OptimJ arrays are accessed using the standard Java syntax. Traditionally, associative arrays are heavily used in the expression of optimization problems, and OptimJ associative arrays are very handy when combined with their specific initialization syntax. Initial values can be given by an intensional definition, in which each entry such as length[i] is initialized with an expression such as names[i].length(), or by an extensional definition, which lists an explicit value for each index. Tuples are ubiquitous in computing, but absent from most mainstream languages including Java. OptimJ provides a notion of tuple at the language level that can be very useful as indexes in combination with associative arrays. Tuple types and tuple values are both written between (: and :). Comprehensions, also called aggregate operations or reductions, are OptimJ expressions that extend a given binary operation over a collection of values. A common example is the sum; this construction is very similar to the big-sigma summation notation used in mathematics, with a syntax compatible with the Java language. Comprehensions can also be used to build collections, such as lists, sets, multisets or maps. Comprehension expressions can have an arbitrary expression as their target, can have an arbitrary number of generators and filters, and need not apply only to numeric values. 
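A hedged sketch of the constructs described above: a quantified constraint with a generator and a filter, an associative array with intensional and extensional initialization, and a sum comprehension. The forall, generator and filter forms follow the article's own description; the initialization arrow, the range of the example data and the aggregate syntax are assumptions and may differ from real OptimJ.

```
// Quantified constraints: the generator iterates over all country pairs,
// the filter (read ':' as "if") keeps only neighboring pairs.
constraints {
    forall(Country c1 : countries, Country c2 : countries, :isNeighbor(c1, c2)) {
        color[c1] != color[c2];
    }
}

// Associative array indexed by String, initialized extensionally
// (explicit entries; the "->" notation is assumed) ...
int[String] age = { "Anna" -> 37, "Boris" -> 22, "Carla" -> 26 };

// ... and intensionally: each entry length[s] is initialized with s.length().
int[String] length[String s : names] = s.length();

// Access uses the standard Java array syntax.
int a = age["Anna"];

// A sum comprehension (schematic syntax), analogous to big-sigma notation.
int totalAge = sum{ age[n] | String n : names };
```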
Set- or multiset-building comprehensions, especially in combination with tuples of strings, make it possible to express queries very similar to SQL database queries. In the context of optimization models, comprehension expressions provide a concise and expressive way to pre-process and clean the input data, and to format the output data. Development environment OptimJ is available as an Eclipse plug-in. The compiler implements a source-to-source translation from OptimJ to standard Java, thus providing immediate compatibility with most development tools of the Java ecosystem. OptimJ GUI and rapid prototyping Since the OptimJ compiler knows about the structure of all data used in models, it is able to generate a structured graphical view of this data at compile-time. This is especially relevant in the case of associative arrays, where the compiler knows the collections used for indexing the various dimensions. The basic graphical view generated by the compiler is reminiscent of an OLAP cube. It can then be customized in many different ways, from simple coloring up to providing new widgets for displaying data elements. The compiler-generated OptimJ GUI saves the OR expert from writing all the glue code required when mapping graphical libraries to data. It enables rapid prototyping by providing immediate visual hints about the structure of data. Another part of the OptimJ GUI reports real-time performance statistics from the solver. This information can be used for understanding performance problems and improving solving time. At this time, it is available only for lp_solve. Supported solvers OptimJ is available for free with the following solvers: lp_solve, glpk, LP or MPS file formats, and also supports the following commercial solvers: Mosek, IBM ILOG CPLEX Optimization Studio. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Bridge_(interpersonal)] | [TOKENS: 521] |
Bridge (interpersonal) A bridge is a type of social tie that connects two different groups in a social network. General bridge In general, a bridge is a direct tie between nodes that would otherwise be in disconnected components of the graph. For example, say that A and B are two groups making up a social networking graph, that n1 is in A, that n2 is in B, and that there is a social tie e between n1 and n2. If e were to be removed, A and B would become disconnected components of the graph; this means that e is a bridge. As an illustration, A could represent a corporation and B Congress. n1 could then be a lobbyist and n2 a Congressman, and e would represent the relationship between that corporation and Congress that only exists through the lobbyist. This is very similar to the concept of a bridge in graph theory, but with special social networking properties such as strong and weak ties. Local bridge Local bridges are ties between two nodes in a social graph that are the shortest route by which information might travel from those connected to one to those connected to the other. Local bridges differ from regular bridges in that the endpoints of a local bridge, once the bridge has been deleted, cannot have a direct tie between them and should not share any common neighbors; if the local bridge is deleted, the distance between these two nodes increases to a value strictly greater than two. Both conditions are illustrated in the code sketch at the end of this article. Social network implications In social networks, bridge relationships transmit information from one group to another. The breadth of information spread depends heavily on the number and connectedness of the bridges available to the originators of the information. Author Malcolm Gladwell characterizes people who habitually act as bridges as Connectors in his book The Tipping Point. Bridges and local bridges are powerful ways to convey awareness of new things, but they are weak at transmitting behaviors that are in some way risky or costly to adopt. Weak ties are able to spread awareness of a joke or an on-line video with remarkable speed, but political mobilization moves more sluggishly, needing to gain momentum within neighborhoods and small communities. McAdam observed that strong ties, rather than weak ties, played a much more dominant role in recruitment to Freedom Summer on college campuses in the 1960s. |
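A short, self-contained Java sketch of the tests described above: the tie e between n1 and n2 is a general bridge if removing it leaves the two endpoints in disconnected components, and a local bridge if they remain connected but only at a distance strictly greater than two. The graph representation, node names and toy network are illustrative assumptions, not part of the article.

```java
import java.util.*;

/** Illustrative bridge / local-bridge check for an undirected social graph. */
public class BridgeCheck {

    /** BFS distance from start to target, or -1 if target is unreachable. */
    static int distance(Map<String, Set<String>> adj, String start, String target) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            if (u.equals(target)) return dist.get(u);
            for (String v : adj.getOrDefault(u, Set.of())) {
                if (!dist.containsKey(v)) {
                    dist.put(v, dist.get(u) + 1);
                    queue.add(v);
                }
            }
        }
        return -1;
    }

    static void addTie(Map<String, Set<String>> adj, String a, String b) {
        adj.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        adj.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    public static void main(String[] args) {
        // Toy network: group A = {ceo, lobbyist}, group B = {congressman, aide},
        // connected only through the tie e = (lobbyist, congressman).
        Map<String, Set<String>> adj = new HashMap<>();
        addTie(adj, "ceo", "lobbyist");
        addTie(adj, "lobbyist", "congressman");
        addTie(adj, "congressman", "aide");

        // Remove e and re-measure the distance between its endpoints.
        adj.get("lobbyist").remove("congressman");
        adj.get("congressman").remove("lobbyist");
        int d = distance(adj, "lobbyist", "congressman");

        if (d == -1) {
            System.out.println("e is a bridge: its endpoints fall into disconnected components");
        } else if (d > 2) {
            System.out.println("e is a local bridge: the distance rises to " + d + " (> 2)");
        } else {
            System.out.println("e is neither: its endpoints share a common neighbor");
        }
    }
}
```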
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Microsoft_365] | [TOKENS: 3632] |
Microsoft 365 Microsoft 365 (previously called Office 365) is a product family of productivity software, collaboration and cloud-based services owned by Microsoft. It encompasses online services such as Outlook.com, OneDrive, Microsoft Teams, programs formerly marketed under the name Microsoft Office (including applications such as Word, Excel, PowerPoint, and Outlook on Microsoft Windows, macOS, mobile devices, and on the web), and enterprise products and services associated with these products such as Exchange Server, SharePoint, and Viva Engage. Microsoft 365 also covers subscription plans encompassing these products, including those that include subscription-based licenses to desktop and mobile software, and hosted email and intranet services. The branding Office 365 was introduced in 2010 to refer to a subscription-based software as a service platform for the corporate market, including hosted services such as Exchange, SharePoint, and Lync Server, and Office on the web. Some plans also included licenses for the Microsoft Office 2010 software. Upon the release of Office 2013, Microsoft began to promote the service as the primary distribution model for the Microsoft Office suite, adding consumer-focused plans integrating with services such as OneDrive and Skype, and emphasizing ongoing feature updates (as opposed to non-subscription licenses, where new versions require purchase of a new license, and are feature updates in and of themselves). In July 2017, Microsoft introduced a second brand of subscription services for the enterprise market known as Microsoft 365, combining Office 365 with Windows 10 Enterprise volume licenses and other cloud-based security and device management products. On April 21, 2020, Office 365 was renamed Microsoft 365 to emphasize the service's current inclusion of products and services beyond the core Microsoft Office software family (including cloud-based productivity tools and artificial intelligence features). Most products that were called Office 365 were renamed as Microsoft 365 on the same day. In October 2022, Microsoft announced that it would discontinue the "Microsoft Office" brand by January 2023, with most of its products and online productivity services being marketed primarily under the "Microsoft 365" brand. However, Microsoft continues to sell perpetually-licensed office suites under the "Microsoft Office" brand. As of April 2025, the Microsoft 365 app is now called Microsoft 365 Copilot. History Microsoft first announced Office 365 on October 19, 2010, beginning with a private beta with various organizations, leading into a public beta on April 18, 2011, and reaching general availability on June 28, 2011, with a launch aimed originally at corporate users. Facing growing competition from Google's similar service Google Workspace, Microsoft designed the Office 365 platform to "bring together" its existing online services (such as the Business Productivity Online Suite) into "an always-up-to-date cloud service" incorporating Exchange Server (for e-mail), SharePoint (for internal social networking, collaboration, and a public web site), and Lync (for communication, VoIP, and conferencing). Plans were initially launched for small business and enterprises; the small business plan offered Exchange e-mail, SharePoint Online, Lync Online, web hosting via SharePoint, and the Web App, with the enterprise plan also adding per-user licenses for the Office 2010 Professional Plus software and 24/7 phone support. 
Following the official launch of the service, Business Productivity Online Suite customers were given 12 months to migrate from BPOS to the Office 365 platform. With the release of Office 2013, an updated version of the Office 365 platform was launched on February 27, 2013, expanding Office 365 to include new plans aimed at different types of businesses, along with new plans aimed at general consumers, including benefits tailored towards Microsoft consumer services such as OneDrive (whose integration with Office was a major feature of the 2013 suite). The server components were updated to their respective 2013 versions, and Microsoft expanded the Office 365 service with new plans, such as Small Business Premium, Midsize Premium, and Pro Plus. A new Office 365 Home Premium plan aimed at home users offers access to the Office 2013 suite for up to five computers, along with expanded OneDrive storage and 60 minutes of Skype calls monthly. The plan is aimed at mainstream consumers, especially those who want to install Office on multiple computers. A University plan was introduced, targeted at post-secondary students. With these new offerings, Microsoft began to offer prepaid Office 365 subscriptions through retail outlets alongside the normal, perpetually licensed editions of Office 2013 (which are only licensed for use on one computer, and do not receive feature updates). On March 19, 2013, Microsoft detailed its plans to provide integration with the enterprise social networking platform Yammer (which they had acquired in 2012) for Office 365, such as the ability to use a single sign-on between the two services, shared feeds and document aggregation, and the ability to entirely replace the SharePoint news feed and social functionality with Yammer. The ability to provide a link to a Yammer network from an Office 365 portal was introduced in June 2013, with heavier integration (such as a Yammer app for SharePoint and single sign-on) to be introduced in July 2013. On July 8, 2013, Microsoft unveiled Power BI, a suite of business intelligence and self-serve data mining tools for Office 365, to be released later in the year. Power BI is primarily incorporated into Excel, allowing users to use the Power Query tool to create spreadsheets and graphs using public and private data, and also perform geovisualization with Bing Maps data using the Power Map tool (previously available as a beta plug-in known as GeoFlow). Users will also be able to access and publish reports, and perform natural language queries on data. As a limited-time offer for certain markets (but notably excluding the US), Microsoft also offered a free one-year Xbox Live Gold subscription with any purchase of an Office 365 Home Premium or University subscription, until September 28, 2013. From April 15, 2014, Microsoft renamed the "Home Premium" plan to "Home,” and added a new "Personal" plan for single users. In June 2014, the amount of OneDrive storage offered to Office 365 subscribers was increased to 1 terabyte from 20 GB. On October 27, 2014, Microsoft announced "unlimited" OneDrive storage for Office 365 subscribers. However, due to abuse and a general reduction in storage options implemented by Microsoft, the 1 TB cap was reinstated in November 2015. In June 2016, Microsoft made Planner available for general release. It is considered to be a competitor to Trello and to other agile team collaboration cloud services. 
In April 2017, Microsoft announced that with the ending of mainstream support for Office 2016 on October 13, 2020, access to OneDrive for Business and Office 365-hosted servers for Skype for Business will become unavailable to those who are not using Office 365 ProPlus or Office perpetual in mainstream support. In July 2019, Microsoft announced that the hosted Skype for Business Online service would be discontinued on July 31, 2021, with users being redirected to the Microsoft Teams collaboration platform as its replacement. Since September 2019, Skype for Business Online is no longer offered to new subscribers. In October 2017, the existing Outlook.com Premium service was discontinued and folded exclusively into Office 365, with all Personal and Family subscribers subsequently being upgraded to 50 GB of storage. The "Microsoft 365" brand was first introduced at Microsoft Inspire in July 2017 as an enterprise subscription product, succeeding the "Secure Productive Enterprise" services released in 2016, and combining Windows 10 Enterprise with Office 365 Business Premium, and the Enterprise Mobility + Security suite including Advanced Threat Analytics, Azure Active Directory, Azure Information Protection, Cloud App Security and Microsoft Intune. Microsoft 365 is sold via Microsoft and its cloud services reseller network. On March 30, 2020, Microsoft announced that the consumer plans of Office 365 would be rebranded as "Microsoft 365" (a brand also used by Microsoft for an enterprise subscription bundle of Windows, Office 365, and security services) on April 21, 2020, succeeding existing consumer plans of Office 365. It is a superset of the existing Office 365 products and benefits, positioned towards "life", productivity, and families, including the Microsoft Office suite, 1 TB of additional OneDrive storage and access to OneDrive Personal Vault, and 60 minutes of Skype calls per month. Under the brand, Microsoft will also add access to its collaboration platform Teams (which will also add additional features designed around family use), and a premium tier of Microsoft Family Safety. Microsoft also announced plans to offer trial offers of third-party services for Microsoft 365 subscribers, with companies such as Adobe (Creative Cloud Photography), Blinkist, CreativeLive, Experian, and Headspace having partnered. Microsoft 365 Personal and Family succeeded the Office 365 Personal and Home subscriptions, with no change in pricing. Office 365 for small- and medium-sized businesses was also renamed Microsoft 365, with Office 365 Business and ProPlus becoming "Microsoft 365 Apps for business" and "Microsoft 365 Apps for enterprise", Office 365 Business Essentials becoming "Microsoft 365 Business Basic", and Office 365 Business Premium becoming "Microsoft 365 Business Standard" (with the existing Microsoft 365 Business product becoming "Microsoft 365 Business Premium"). The Office 365 brand remains in use for its enterprise, education, healthcare, and governmental plans. Microsoft stated that "over the last several years, our cloud productivity offering has grown well beyond what people traditionally think of as 'Office'", citing examples such as Forms, Planner, Stream, and Teams. On October 13, 2022, Microsoft announced that it would be phasing out the Microsoft Office brand, in favor of branding all products under the Microsoft 365 name. This change took effect on Office.com in November 2022, followed by the Office mobile apps in January 2023. 
The Microsoft Office brand will still be used for legacy products, including subscription products still carrying the "Office 365" name since the previous Microsoft 365 rebranding, and the "on-premises"/perpetually licensed Microsoft Office 2021. In November 2024, Microsoft expanded the use of Copilot in Microsoft 365 apps to Microsoft 365 Personal and Home subscribers in Asia-Pacific markets, previously exclusive to Microsoft's Copilot Pro subscription. Microsoft subsequently increased the price of its consumer subscriptions to accommodate the addition of Copilot features. In December 2024, Microsoft announced that the Microsoft 365 app would be rebranded as the Microsoft 365 Copilot App in an effort to highlight the integration of Copilot features for Microsoft 365 Personal and Home subscriptions. The logo was rebranded to the Copilot logo with an "M365" tag. In March 2025, Microsoft announced that it would bring Copilot for OneDrive to Microsoft 365 Personal and Home subscriptions. Copilot for OneDrive allows consumers to use Copilot to interact with their files stored in OneDrive. Software and services The Microsoft 365 desktop applications (formerly marketed as Microsoft Office) are primarily used on personal computers running Microsoft Windows, and are distributed as part of the Microsoft 365 subscription. They are installed using a "click-to-run" system which allows users to begin using the applications almost instantaneously, while files are downloaded in the background. Updates to the software are installed automatically, covering both security and feature updates. These applications were one of the core components of the initial Office 365 service. If the user's subscription lapses, the applications enter a read-only mode where editing functionality is disabled. Full functionality is restored once a new subscription is purchased and activated. Although there are still "on-premises" or "perpetual" releases of Office released on a three-year cycle, these versions do not receive new features or access to new cloud-based services as they are released on Microsoft 365. All of these applications, excluding Access and Publisher, are also available on macOS. Word, Excel, and PowerPoint are available as mobile and web apps, usable for free with limitations, although they do not contain all of the functionality as the desktop versions. The mobile apps were originally limited to Office 365 subscribers, but basic editing and document creation has since been made free for personal use. An active Microsoft 365 subscription is still required to unlock certain advanced editing features, use the apps on devices with screens larger than 10.1 inches, or to use the apps for commercial purposes. In February 2020, Microsoft introduced a new Microsoft Office app that integrates Word, Excel, and PowerPoint, replacing the previous, separate apps for each. Microsoft Outlook for mobile is derived from the apps Acompli and Sunrise Calendar, which were acquired by Microsoft and discontinued. Some Microsoft 365 online services are usable without a subscription, but with limitations such as advertising and lower storage limits. Business and enterprise-oriented plans for Microsoft/Office 365 offer access to cloud-hosted server platforms on a software as a service basis, including Exchange, Skype for Business, SharePoint, Microsoft Dictate (speech recognition), and Office on the web. 
Through SharePoint's OneDrive for Business functionality (formerly known as SharePoint MySites and SkyDrive Pro, and distinct from the consumer-oriented OneDrive service), each user also receives 1 TB of online storage. Certain plans also include unlimited personal cloud storage per user. Microsoft 365 services can be configured through an online portal; users can be added manually, imported from a CSV file, or set up for single sign-on with a local Active Directory using Active Directory Federation Services. More advanced setup and features requires the use of PowerShell scripts. Subscription plans Microsoft 365 offers subscription plans aimed at different needs and market segments, providing different sets of features at different price points. Microsoft has also offered Office 365 subscriptions to students of institutions who have licensed Office software for their faculty. There are two separate backends for the products: Aimed at mainstream consumers, both plans offer access to Microsoft Office applications (Word, Excel, PowerPoint, Outlook, Publisher, and Access) for home/non-commercial use on one computer (Windows, macOS, and mobile devices), with access to additional online-based services and premium creative content, 1 TB of OneDrive storage with Advanced Security, 60 minutes of Skype international calls per month (subject to area), and partner offers. Security In December 2011, Microsoft announced that the Office 365 platform was now compliant with the ISO/IEC 27001 security standards, the European Union's Data Protection Directive (through the signing of model clauses), and the Health Insurance Portability and Accountability Act for health care environments in the United States. At the same time, Microsoft also unveiled a new "Trust Center" portal, containing further information on its privacy policies and security practices for the service. In May 2012, Microsoft announced that Office 365 was now compliant with the Federal Information Security Management Act; compliance with the act would now allow Office 365 to be used by U.S. government agencies. In spite of claiming to comply with European data protection standards, and in spite of existing Safe Harbor agreements, Microsoft has admitted that it will not refrain from handing over data stored on its European servers to US authorities under the Patriot Act. In Finland, FICORA has warned Office 365 users of phishing incidents and break-ins that have caused losses of millions of euros. In September 2019, NCSC-FI (National Cyber Security Centre of Finland) created a detailed guide on how to protect Microsoft Office 365 against phishing attempts and any data breaches. In July 2019, the German state of Hesse outlawed the use of Office 365 in educational institutions, citing privacy risks. Denmark’s Ministry of Digital Affairs will phase out Windows and Office 365 by November 2025. In December 2020, the US Department of Commerce was breached via Office 365. The attackers were able to access staff emails for several months. A July 1, 2021 cybersecurity advisory from British (NCSC) and American (NSA, FBI, CISA) security agencies warned of a GRU brute-force campaign from mid-2019 to the present (July 2021) that focused a "significant amount" of activity on Microsoft Office 365 cloud services. On November 10, 2023, IT news portal Heise online disclosed that "if you try out the new Outlook, you risk transferring your IMAP and SMTP credentials of mail accounts and all your emails to Microsoft servers." 
Outages On March 1, 2025, Microsoft experienced a global outage. All Microsoft products were down for a few hours. Reception TechRadar gave the 2013 update of Office 365 a 4.5 out of 5, praising its administration interfaces for being accessible to users with any level of expertise, the seamless integration of OneDrive Pro into the Office 2013 desktop applications, and the service as a whole for being suitable in small business environments, while still offering "powerful" options for use in larger companies (such as data loss protection and the ability to integrate with a local Active Directory instance). However, the service was severely criticized for how it handled its 2013 update for existing users, and its lack of integration with services such as Skype and Viva Engage. In the fourth quarter of fiscal year 2017, Office 365 revenue exceeded that of conventional license sales of Microsoft Office software for the first time. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Revenue] | [TOKENS: 1324] |
Contents Revenue In accounting, revenue is the total amount of income generated by the sale of goods and services related to the primary operations of a business. Commercial revenue may also be referred to as sales or as turnover. Some companies receive revenue from interest, royalties, or other fees. "Revenue" may refer to income in general, or it may refer to the amount, in a monetary unit, earned during a period of time, as in "Last year, company X had revenue of $42 million". Profits or net income generally imply total revenue minus total expenses in a given period. In accounting, revenue is a subsection of the equity section of the balance statement, since it increases equity. It is often referred to as the "top line" due to its position at the very top of the income statement. This is to be contrasted with the "bottom line" which denotes net income (gross revenues minus total expenses). In general usage, revenue is the total amount of income by the sale of goods or services related to the company's operations. Sales revenue is income received from selling goods or services over a period of time. Tax revenue is income that a government receives from taxpayers. Fundraising revenue is income received by a charity from donors etc. to further its social purposes. In more formal usage, revenue is a calculation or estimation of periodic income based on a particular standard accounting practice or the rules established by a government or government agency. Two common accounting methods, cash basis accounting and accrual basis accounting, do not use the same process for measuring revenue. Corporations that offer shares for sale to the public are usually required by law to report revenue based on generally accepted accounting principles or on International Financial Reporting Standards. In a double-entry bookkeeping system, revenue accounts are general ledger accounts that are summarized periodically under the heading "revenue" or "revenues" on an income statement. Revenue account-names describe the type of revenue, such as "repair service revenue", "rent revenue earned", “interest revenue”, or "sales". In a journal entry or on a ledger, revenue is recorded as a credit, and as a result, cash, deferred revenue, or accounts receivable is debited. Non-profit organizations For non-profit organizations, revenue may be referred to as gross receipts, support, contributions, etc. This operating revenue can include donations from individuals and corporations, support from government agencies, income from activities related to the organization's mission, income from fundraising activities, and membership dues. Revenue (income and gains) from investments may be categorized as "operating" or "non-operating"—but for many non-profits must (simultaneously) be categorized by fund (along with other accounts). For non-profits with substantial revenue from the dues of their voluntary members: non-dues revenue is revenue generated through means besides association membership fees. This revenue can be found through means of sponsorships, donations or outsourcing the association's digital media outlets. Business revenue Business revenue is money income from activities that are ordinary for a particular corporation, company, partnership, or sole-proprietorship. For some businesses, such as manufacturing or grocery, most revenue is from the sale of goods. Service businesses such as law firms and barber shops receive most of their revenue from rendering services. 
Lending businesses such as car rentals and banks receive most of their revenue from fees and interest generated by lending assets to other organizations or individuals. Revenues from a business's primary activities are reported as sales, sales revenue or net sales. This includes product returns and discounts for early payment of invoices. Most businesses also have revenue that is incidental to the business's primary activities, such as interest earned on deposits in a demand account. This is included in revenue but not included in net sales. Sales revenue does not include sales tax collected by the business. Other revenue (a.k.a. non-operating revenue) is revenue from peripheral (non-core) operations. For example, a company that manufactures and sells automobiles would record the revenue from the sale of an automobile as "regular" revenue. If that same company also rented a portion of one of its buildings, it would record that revenue as "other revenue" and disclose it separately on its income statement to show that it is from something other than its core operations. The combination of all the revenue-generating systems of a business is called its revenue model. While the current IFRS conceptual framework no longer draws a distinction between revenue and gains, it continues to be drawn at the standard and reporting levels. For example, IFRS 9.5.7.1 states: "A gain or loss on a financial asset or financial liability that is measured at fair value shall be recognised in profit or loss ..." while the IASB defined IFRS XBRL taxonomy includes OtherGainsLosses, GainsLossesOnNetMonetaryPosition and similar items. Revenue is a crucial part of financial statement analysis. The company's performance is measured to the extent to which its asset inflows (revenues) compare with its asset outflows (expenses). Net income is the result of this equation, but revenue typically enjoys equal attention during a standard earnings call. If a company displays solid "top-line growth", analysts could view the period's performance as positive even if earnings growth, or "bottom-line growth" is stagnant. Conversely, high net income growth would be tainted if a company failed to produce significant revenue growth. Consistent revenue growth, if accompanied by net income growth, contributes to the value of an enterprise and therefore the share price. Revenue is used as an indication of earnings quality. There are several financial ratios attached to it: Government revenue Government revenue includes all amounts of money (i.e., taxes and fees) received from sources outside the government entity. Large governments usually have an agency or department responsible for collecting government revenue from companies and individuals. Government revenue may also include reserve bank currency which is printed. This is recorded as an advance to the retail bank together with a corresponding currency in circulation expense entry, that is, the income derived from the Official Cash rate payable by the retail banks for instruments such as 90-day bills. There is a question as to whether using generic business-based accounting standards can give a fair and accurate picture of government accounts, in that with a monetary policy statement to the reserve bank directing a positive inflation rate, the expense provision for the return of currency to the reserve bank is largely symbolic, such that to totally cancel the currency in circulation provision, all currency would have to be returned to the reserve bank and canceled. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Volcanism_on_Mars] | [TOKENS: 6059] |
Contents Volcanism on Mars Volcanic activity, or volcanism, has played a significant role in the geologic evolution of Mars. Scientists have known since the Mariner 9 mission in 1972 that volcanic features cover large portions of the Martian surface. These features include extensive lava flows, vast lava plains, and, such as Olympus Mons, the largest known volcanoes in the Solar System. Martian volcanic features range in age from Noachian (>3.7 billion years) to late Amazonian (< 500 million years), indicating that the planet has been volcanically active throughout its history, and some speculate it probably still is so today. Both Mars and Earth are large, differentiated planets built from similar chondritic materials. Many of the same magmatic processes that occur on Earth also occurred on Mars, and both planets are similar enough compositionally that the same names can be applied to their igneous rocks. Background Volcanism is a process in which magma from a planet's interior rises through the crust and erupts on the surface. The erupted materials consist of molten rock (lava), hot fragmental debris (tephra or ash), and gases. Volcanism is a principal way that planets release their internal heat. Volcanic eruptions produce distinctive landforms, rock types, and terrains that provide a window on the chemical composition, thermal state, and history of a planet's interior. Magma is a complex, high-temperature mixture of molten silicates, suspended crystals, and dissolved gases. Magma on Mars likely ascends in a similar manner to that on Earth. It rises through the lower crust in diapiric bodies that are less dense than the surrounding material. As the magma rises, it eventually reaches regions of lower density. When the magma density matches that of the host rock, buoyancy is neutralized and the magma body stalls. At this point, it may form a magma chamber and spread out laterally into a network of dikes and sills. Subsequently, the magma may cool and solidify to form intrusive igneous bodies (plutons). Geologists estimate that about 80% of the magma generated on Earth stalls in the crust and never reaches the surface. As magma rises and cools, it undergoes many complex and dynamic compositional changes. Heavier minerals may crystallize and settle to the bottom of the magma chamber. The magma may also assimilate portions of host rock or mix with other batches of magma. These processes alter the composition of the remaining melt, so that any magma reaching the surface may be chemically quite different from its parent melt. Magmas that have been so altered are said to be "evolved" to distinguish them from "primitive" magmas that more closely resemble the composition of their mantle source. (See igneous differentiation and fractional crystallization.) More highly evolved magmas are usually felsic, that is enriched in silica, volatiles, and other light elements compared to iron- and magnesium-rich (mafic) primitive magmas. The degree and extent to which magmas evolve over time is an indication of a planet's level of internal heat and tectonic activity. The Earth's continental crust is made up of evolved granitic rocks that developed through many episodes of magmatic reprocessing. Evolved igneous rocks are much less common on cold, dead bodies such as the Moon. Mars, being intermediate in size between the Earth and the Moon, is thought to be intermediate in its level of magmatic activity. At shallower depths in the crust, the lithostatic pressure on the magma body decreases. 
The reduced pressure can cause gases (volatiles), such as carbon dioxide and water vapor, to exsolve from the melt into a froth of gas bubbles. The nucleation of bubbles causes a rapid expansion and cooling of the surrounding melt, producing glassy shards that may erupt explosively as tephra (also called pyroclastics). Fine-grained tephra is commonly referred to as volcanic ash. Whether a volcano erupts explosively or effusively as fluid lava depends on the composition of the melt. Felsic magmas of andesitic and rhyolitic composition tend to erupt explosively. They are very viscous (thick and sticky) and rich in dissolved gases. Mafic magmas, on the other hand, are low in volatiles and commonly erupt effusively as basaltic lava flows. However, these are only generalizations. For example, magma that comes into sudden contact with groundwater or surface water may erupt violently in steam explosions called hydromagmatic (phreatomagmatic or phreatic) eruptions. Erupting magmas may also behave differently on planets with different interior compositions, atmospheres, and gravitational fields. Differences in volcanic styles between Earth and Mars The most common form of volcanism on the Earth is basaltic. Basalts are extrusive igneous rocks derived from the partial melting of the upper mantle. They are rich in iron and magnesium (mafic) minerals and commonly dark gray in color. The principal type of volcanism on Mars is almost certainly basaltic too. On Earth, basaltic magmas commonly erupt as highly fluid flows, which either emerge directly from vents or form by the coalescence of molten clots at the base of lava fountains (Hawaiian eruption). These styles are also common on Mars, but the lower gravity and atmospheric pressure on Mars allow nucleation of gas bubbles (see above) to occur more readily and at greater depths than on Earth. As a consequence, Martian basaltic volcanoes are also capable of erupting large quantities of ash in Plinian-style eruptions. In a Plinian eruption, hot ash is incorporated into the atmosphere, forming a huge convective column (cloud). If insufficient atmosphere is incorporated, the column may collapse to form pyroclastic flows. Plinian eruptions are rare in basaltic volcanoes on Earth where such eruptions are most commonly associated with silica-rich andesitic or rhyolitic magmas (e.g., Mount St. Helens). Because the lower gravity of Mars generates less buoyancy forces on magma rising through the crust, the magma chambers that feed volcanoes on Mars are thought to be deeper and much larger than those on Earth. If a magma body on Mars is to reach close enough to the surface to erupt before solidifying, it must be big. Consequently, eruptions on Mars are less frequent than on Earth, but are of enormous scale and eruptive rate when they do occur. Somewhat paradoxically, the lower gravity of Mars also allows for longer and more widespread lava flows. Lava eruptions on Mars may be unimaginably huge. A vast lava flow the size of the state of Oregon has recently been described in western Elysium Planitia. The flow is believed to have been emplaced turbulently over the span of several weeks and thought to be one of the youngest lava flows on Mars. The tectonic settings of volcanoes on Earth and Mars are very different. Most active volcanoes on Earth occur in long, linear chains along plate boundaries, either in zones where the lithosphere is spreading apart (divergent boundaries) or being subducted back into the mantle (convergent boundaries). 
Because Mars currently lacks plate tectonics, volcanoes there do not show the same global pattern as on Earth. Martian volcanoes are more analogous to terrestrial mid-plate volcanoes, such as those in the Hawaiian Islands, which are thought to have formed over a stationary mantle plume. (See hot spot.) The paragenetic tephra from a Hawaiian cinder cone has been mined to create Martian regolith simulant for researchers to use since 1998. The largest and most conspicuous volcanoes on Mars occur in Tharsis and Elysium regions. These volcanoes are strikingly similar to shield volcanoes on Earth. Both have shallow-sloping flanks and summit calderas. The main difference between Martian shield volcanoes and those on Earth is in size: Martian shield volcanoes are truly colossal. For example, the tallest volcano on Mars, Olympus Mons, is 550 km across and 21 km high. It is nearly 100 times greater in volume than Mauna Loa in Hawaii, the largest active shield volcano on Earth. Geologists think one of the reasons that volcanoes on Mars are able to grow so large is because Mars lacks plate tectonics. The Martian lithosphere does not slide over the upper mantle (asthenosphere) as on Earth, so lava from a stationary hot spot is able to accumulate at one location on the surface for a billion years or longer. In 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of volcanoes in Hawaii. In 2015, the same rover identified tridymite in a rock sample from Gale Crater, leading scientists to believe that silicic volcanism might have played a much more prevalent role in the planet's volcanic history than previously thought. Tharsis volcanic province The western hemisphere of Mars is dominated by a massive volcano-tectonic complex known as the Tharsis region or the Tharsis bulge. This immense, elevated structure is thousands of kilometers in diameter and covers up to 25% of the planet's surface. Averaging 7–10 km above datum (Martian "sea" level), Tharsis contains the highest elevations on the planet. Three enormous volcanoes, Ascraeus Mons, Pavonis Mons, and Arsia Mons (collectively known as the Tharsis Montes), sit aligned northeast–southwest along the crest of the bulge. The vast Alba Mons (formerly Alba Patera) occupies the northern part of the region. The huge shield volcano Olympus Mons lies off the main bulge, at the western edge of the province. Built up by countless generations of lava flows and ash, the Tharsis bulge contains some of the youngest lava flows on Mars, but the bulge itself is believed to be very ancient. Geologic evidence indicates that most of the mass of Tharsis was in place by the end of the Noachian Period, about 3.7 billion years ago (Gya). Tharsis is so massive that it has placed tremendous stresses on the planet's lithosphere, generating immense extensional fractures (grabens and rift valleys) that extend halfway around the planet. The mass of Tharsis could have even altered the orientation of Mars' rotational axis, causing climate changes. The three Tharsis Montes are shield volcanoes centered near the equator at longitude 247°E. All are several hundred kilometers in diameter and range in height from 14 to 18 km. 
Arsia Mons, the southernmost of the group, has a large summit caldera that is 130 kilometres (81 mi) across and 1.3 kilometres (0.81 mi) deep. Pavonis Mons, the middle volcano, has two nested calderas, with the smaller one being almost 5 kilometres (3.1 mi) deep. Ascraeus Mons, in the north, has a complex set of internested calderas and a long history of eruption that is believed to span most of Mars' history. The three Tharsis Montes are about 700 kilometres (430 mi) apart. They show a distinctive northeast–southwest alignment that has been the source of some interest. Ceraunius Tholus and Uranius Mons follow the same trend to the northeast, and aprons of young lava flows on the flanks of all three Tharsis Montes are aligned in the same northeast–southwest orientation. This line clearly marks a major structural feature in the Martian crust, but its origin is uncertain. In addition to the large shield volcanoes, Tharsis contains a number of smaller volcanoes called tholi and paterae. The tholi are dome-shaped edifices with flanks that are much steeper than the larger Tharsis shields. Their central calderas are also quite large in proportion to their base diameters. The density of impact craters on many of the tholi indicates that they are older than the large shields, having formed between late Noachian and early Hesperian times. Ceraunius Tholus and Uranius Tholus have densely channeled flanks, suggesting that the flank surfaces are made up of easily erodible material, such as ash. The age and morphology of the tholi provide strong evidence that they represent the summits of old shield volcanoes that have been largely buried by great thicknesses of younger lava flows. By one estimate, the Tharsis tholi may be buried by up to 4 km of lava. Patera (pl. paterae) is Latin for a shallow drinking bowl. The term was applied to certain ill-defined, scalloped-edged craters that appeared in early spacecraft images to be large volcanic calderas. The smaller paterae in Tharsis appear to be morphologically similar to the tholi, except for having larger calderas. Like the tholi, the Tharsis paterae probably represent the tops of larger, now buried shield volcanoes. Historically, the term patera has been used to describe the entire edifice of certain volcanoes on Mars (e.g., Alba Patera). In 2007, the International Astronomical Union (IAU) redefined the terms Alba Patera, Uranius Patera, and Ulysses Patera to refer only to the central calderas of these volcanoes. Olympus Mons is the youngest and tallest large volcano on Mars. It is located 1200 km northwest of the Tharsis Montes, just off the western edge of the Tharsis bulge. Its summit is 21 km above datum (Mars "sea" level) and has a central caldera complex consisting of six nested calderas that together form a depression 72 × 91 km wide and 3.2 km deep. As a shield volcano, it has an extremely low profile with shallow slopes averaging between 4 and 5 degrees. The volcano was built up by many thousands of individual flows of highly fluid lava. An irregular escarpment, in places up to 8 km tall, lies at the base of the volcano, forming a kind of pedestal on which the volcano sits. At various locations around the volcano, immense lava flows can be seen extending into the adjacent plains, burying the escarpment. In medium-resolution images (100 m/pixel), the surface of the volcano has a fine radial texture due to the innumerable flows and leveed lava channels that line its flanks. 
Alba Mons, located in the northern Tharsis region, is a unique volcanic structure, with no counterpart on Earth or elsewhere on Mars. The flanks of the volcano have extremely low slopes characterized by extensive lava flows and channels. The average flank slope on Alba Mons is only about 0.5°, less than one-fifth as steep as the slopes on the other Tharsis volcanoes. The volcano has a central edifice 350 km wide and 1.5 km high with a double caldera complex at the summit. Surrounding the central edifice is an incomplete ring of fractures. Flows related to the volcano can be traced as far north as 61°N and as far south as 26°N. If one counts these widespread flow fields, the volcano stretches an immense 2000 km north–south and 3000 km east–west, making it one of the most areally extensive volcanic features in the Solar System. Most geological models suggest that Alba Mons is composed of highly fluid basaltic lava flows, but some researchers have identified possible pyroclastic deposits on the volcano's flanks. Because Alba Mons lies antipodal to the Hellas impact basin, some researchers have conjectured that the volcano's formation may have been related to crustal weakening from the Hellas impact, which produced strong seismic waves that focused on the opposite side of the planet. Elysium volcanic province A smaller volcanic center lies several thousand kilometers west of Tharsis in Elysium. The Elysium volcanic complex is about 2,000 kilometers in diameter and consists of three main volcanoes, Elysium Mons, Hecates Tholus, and Albor Tholus. The northwestern edge of the province is characterized by large channels (Granicus and Tinjar Valles) that emerge from several grabens on the flanks of Elysium Mons. The grabens may have formed from subsurface dikes. The dikes may have fractured the cryosphere, releasing large volumes of ground water to form the channels. Associated with the channels are widespread sedimentary deposits that may have formed from mudflows or lahars. The Elysium group of volcanoes is thought to be somewhat different from the Tharsis Montes, in that development of the former involved both lavas and pyroclastics. Elysium Mons is the largest volcanic edifice in the province. It is 375 km across (depending on how one defines the base) and 14 km high. It has a single, simple caldera at its summit that measures 14 km wide and 100 m deep. The volcano is distinctly conical in profile, leading some to call it a stratocone; however, given the predominantly low slopes, it is probably a shield. Elysium Mons is only about one-fifth the volume of Arsia Mons. Hecates Tholus is 180 km across and 4.8 km high. The slopes of the volcano are heavily dissected with channels, suggesting that the volcano is composed of easily erodible material such as volcanic ash. The origin of the channels is unknown; they have been attributed to lava, ash flows, or even water from snow or rainfall. Albor Tholus, the southernmost of the Elysium volcanoes, is 150 km in diameter and 4.1 km high. Its slopes are smoother and less heavily cratered than the slopes of the other Elysium volcanoes. Syrtis Major Syrtis Major Planum is a vast Hesperian-aged shield volcano located within the albedo feature bearing the same name. The volcano is 1200 km in diameter but only 2 km high. It has two calderas, Meroe Patera and Nili Patera. Studies involving the regional gravity field suggest a solidified magma chamber at least 5 km thick lies under the surface. 
Syrtis Major is of interest to geologists because dacite and granite have been detected there from orbiting spacecraft. Dacites and granites are silica-rich rocks that crystallize from a magma that is more chemically evolved and differentiated than basalt. They may form at the top of a magma chamber after the heavy minerals, such as olivine and pyroxene (those containing iron and magnesium), have settled to the bottom. Dacites and granites are very common on Earth but rare on Mars. Arabia Terra Arabia Terra is a large upland region in the north of Mars that lies mostly in the Arabia quadrangle. Several irregularly shaped craters found within the region represent a type of highland volcanic construct which, taken together, represent a Martian igneous province. Low-relief paterae within the region possess a range of geomorphic features, including structural collapse, effusive volcanism and explosive eruptions, that are similar to terrestrial supervolcanoes. The enigmatic highland ridged plains in the region may have been formed, in part, by the related flow of lavas. Highland paterae In the southern hemisphere, particularly around the Hellas impact basin, are several flat-lying volcanic structures called highland paterae. These volcanoes are some of the oldest identifiable volcanic edifices on Mars. They are characterized by having extremely low profiles with highly eroded ridges and channels that radiate outward from a degraded, central caldera complex. They include Tyrrhena Patera and Hadriaca Patera to the northeast of Hellas and Amphitrites Patera, Peneus Patera, Malea Patera and Pityusa Patera to the southwest of Hellas. Geomorphologic evidence suggests that the highland paterae were produced through a combination of lava flows and pyroclastics from the interaction of magma with water. Some researchers speculate that the location of the highland paterae around Hellas is due to deep-seated fractures caused by the impact that provided conduits for magma to rise to the surface. Although they are not very high, some paterae cover large areas: Amphitrites Patera, for example, covers a larger area than Olympus Mons, while Pityusa Patera, the largest, has a caldera nearly large enough to fit Olympus Mons inside it. Volcanic plains Volcanic plains are widespread on Mars. Two types of plains are commonly recognized: those where lava flow features are common, and those where flow features are generally absent but a volcanic origin is inferred by other characteristics. Plains with abundant lava flow features occur in and around the large volcanic provinces of Tharsis and Elysium. Flow features include both sheet flow and tube- and channel-fed flow morphologies. Sheet flows show complex, overlapping flow lobes and may extend for many hundreds of kilometers from their source areas. Lava flows can form a lava tube when the exposed upper layers of lava cool and solidify to form a roof while the lava underneath continues flowing. Often, when all the remaining lava leaves the tube, the roof collapses to make a channel or line of pit craters (catena). An unusual type of flow feature occurs in the Cerberus plains south of Elysium and in Amazonis. These flows have a broken platey texture, consisting of dark, kilometer-scale slabs embedded in a light-toned matrix. They have been attributed to rafted slabs of solidified lava floating on a still-molten subsurface. 
Others have claimed the broken slabs represent pack ice that froze over a sea that pooled in the area after massive releases of groundwater from the Cerberus Fossae area. The second type of volcanic plains (ridged plains) is characterized by abundant wrinkle ridges. Volcanic flow features are rare or absent. The ridged plains are believed to be regions of extensive flood basalts, by analogy with the lunar maria. Ridged plains make up about 30% of the Martian surface and are most prominent in Lunae, Hesperia, and Malea Plana, as well as throughout much of the northern lowlands. Ridged plains are all Hesperian in age and represent a style of volcanism globally predominant during that time period. The Hesperian Period is named after the ridged plains in Hesperia Planum. Potential current volcanism Scientists have never recorded an active volcanic eruption on the surface of Mars; moreover, searches for thermal signatures and surface changes before 2011 did not yield any positive evidence for active volcanism. However, the European Space Agency's Mars Express orbiter photographed lava flows interpreted in 2004 to have occurred within the past two million years, suggesting relatively recent geologic activity. An updated study in 2011 estimated that the youngest lava flows occurred in the last few tens of millions of years. The authors consider that this age makes it possible that Mars is not yet volcanically extinct. The InSight lander mission was designed to determine whether there is any seismic activity, measure the amount of heat flow from the interior, estimate the size of Mars' core, and determine whether the core is liquid or solid. The findings were that Mars possesses a molten outer core and a solid inner core, with a partially molten mantle. In 2020, astronomers reported evidence for volcanic activity on Mars as recently as 53,000 years ago in the Cerberus Fossae within Elysium Planitia. Such activity could have provided the environment, in terms of energy and chemicals, needed to support life forms. Large amounts of water ice are believed to be present in the Martian subsurface. The interaction of ice with molten rock can produce distinct landforms. On Earth, when hot volcanic material comes into contact with surface ice, large amounts of liquid water and mud may form that flow catastrophically down slope as massive debris flows (lahars). Some channels in Martian volcanic areas, such as Hrad Vallis near Elysium Mons, may have been similarly carved or modified by lahars. Lava flowing over water-saturated ground can cause the water to erupt violently in an explosion of steam (see phreatic eruption), producing small volcano-like landforms called pseudocraters, or rootless cones. Features that resemble terrestrial rootless cones occur in Elysium, Amazonis, and Isidis and Chryse Planitiae. Phreatomagmatism also produces tuff rings or tuff cones on Earth, and the existence of similar landforms on Mars is expected as well; candidate examples have been suggested in the Nepenthes/Amenthes region. Finally, when a volcano erupts under an ice sheet, it can form a distinct, mesa-like landform called a tuya or table mountain. Some researchers cite geomorphic evidence that many of the layered interior deposits in Valles Marineris may be the Martian equivalent of tuyas. Tectonic boundaries Tectonic boundaries have been discovered on Mars. Valles Marineris is a horizontally sliding tectonic boundary that divides two major partial or complete plates of Mars. 
The recent finding suggests that Mars is geologically active, with movement along such boundaries occurring on timescales of millions of years. There has been previous evidence of Mars' geologic activity. The Mars Global Surveyor (MGS) discovered magnetic stripes in the crust of Mars, especially in the Phaethontis and Eridania quadrangles. The magnetometer on MGS discovered 100 km wide stripes of magnetized crust running roughly parallel for up to 2000 km. These stripes alternate in polarity, with the north magnetic pole of one pointing up from the surface and the north magnetic pole of the next pointing down. When similar stripes were discovered on Earth in the 1960s, they were taken as evidence of plate tectonics. However, there are some differences between the magnetic stripes on Earth and those on Mars. The Martian stripes are wider, much more strongly magnetized, and do not appear to spread out from a middle crustal spreading zone. Because the area with the magnetic stripes is about 4 billion years old, it is believed that the global magnetic field probably lasted for only the first few hundred million years of Mars' life. At that time the temperature of the molten iron in the planet's core might have been high enough to mix it into a magnetic dynamo. Younger rock does not show any stripes. When molten rock containing magnetic material, such as hematite (Fe2O3), cools and solidifies in the presence of a magnetic field, it becomes magnetized and takes on the polarity of the background field. This magnetism is lost only if the rock is subsequently heated above the Curie temperature, which is 770 °C for pure iron, but lower for oxides such as hematite (approximately 650 °C) or magnetite (approximately 580 °C). The magnetism left in rocks is a record of the magnetic field when the rock solidified. Mars' volcanic features can be likened to Earth's geologic hotspots. Pavonis Mons is the middle of three volcanoes (collectively known as Tharsis Montes) on the Tharsis bulge near the equator of the planet Mars. The other Tharsis volcanoes are Ascraeus Mons and Arsia Mons. The three Tharsis Montes, together with some smaller volcanoes to the north, form a straight line. This arrangement suggests that they were formed by a crustal plate moving over a hot spot. Such an arrangement exists in the Earth's Pacific Ocean as the Hawaiian Islands. The Hawaiian Islands are in a straight line, with the youngest in the south and the oldest in the north. So geologists believe the plate is moving while a stationary plume of hot magma rises and punches through the crust to produce volcanic mountains. However, the largest volcano on the planet, Olympus Mons, is thought to have formed when the plates were not moving. Olympus Mons may have formed just after the plate motion stopped. The mare-like plains on Mars are roughly 3 to 3.5 billion years old. The giant shield volcanoes are younger, formed between 1 and 2 billion years ago. Olympus Mons may be "as young as 200 million years." In 1994, Norman H. Sleep, professor of geophysics at Stanford University, described how the three volcanoes that form a line along the Tharsis Ridge may be extinct island arc volcanoes like the island chain of Japan. See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-STD-20230416-98] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. 
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the following are the three primary periods: the Noachian, the Hesperian, and the Amazonian. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. 
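The "about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity" figures quoted in this passage can be sanity-checked with g = GM/R². The sketch below is illustrative only; the mass and radius values are commonly quoted figures and are not taken from this article.

```python
# Rough check (assumed mass/radius values) of the ~38% surface-gravity figure.
G = 6.674e-11                            # m^3 kg^-1 s^-2, gravitational constant

M_EARTH, R_EARTH = 5.972e24, 6.371e6     # kg, m
M_MARS,  R_MARS  = 6.417e23, 3.3895e6    # kg, m

def surface_gravity(mass, radius):
    """Surface gravity g = G*M/R^2 in m/s^2."""
    return G * mass / radius**2

g_earth = surface_gravity(M_EARTH, R_EARTH)        # ~9.8 m/s^2
g_mars = surface_gravity(M_MARS, R_MARS)           # ~3.7 m/s^2
print(f"mass ratio:    {M_MARS / M_EARTH:.2f}")    # ~0.11
print(f"gravity ratio: {g_mars / g_earth:.2f}")    # ~0.38
```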
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogenous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, which is significantly less than the 1.84 millisieverts per day, or 22 millirads per day, during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
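The reference datum described earlier in this passage, 610.5 Pa at the triple point of water, can be compared directly with Earth's standard sea-level pressure of 101,325 Pa; the one-line check below is illustrative arithmetic only.

```python
# Illustrative check: the 610.5 Pa Martian datum pressure as a fraction of
# Earth's standard sea-level pressure (101,325 Pa = 1 atm).
print(f"{610.5 / 101_325:.4f}")   # ~0.0060, i.e. about 0.6% (~0.006 atm)
```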
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. 
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter, which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons that alternate between its northern and southern hemispheres, much as on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also produce a covering of dry ice on the polar ice caps. Hydrology While Mars contains substantial amounts of water, most of it is dust-covered water ice at the Martian polar ice caps. 
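The scale-height comparison at the start of this passage follows from the isothermal approximation H = RT/(Mg). The sketch below is illustrative only: the shared temperature and the mean molar masses are assumed round values chosen to reproduce the quoted ~10.8 km (Mars) and ~6 km (Earth) figures, and the result shifts with whatever temperature is assumed.

```python
# Hedged sketch (assumed values): isothermal scale height H = R*T / (M*g).
R_GAS = 8.314                 # J mol^-1 K^-1, universal gas constant

def scale_height(temp_k, molar_mass_kg, g):
    """Isothermal atmospheric scale height in metres."""
    return R_GAS * temp_k / (molar_mass_kg * g)

# Same illustrative temperature for both planets; molar masses are rough
# CO2-dominated (Mars) and N2/O2 (Earth) mixtures.
print(scale_height(210, 0.0434, 3.71))   # Mars: ~10,800 m
print(scale_height(210, 0.0290, 9.81))   # Earth: ~6,100 m
```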
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce water-ice clouds and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering have been found, and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
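The "depth of 11 metres" figure at the start of this passage is a simple volume-over-area calculation. In the back-of-envelope sketch below, the ~1.6 million km³ ice volume for the south polar cap is an assumed, commonly quoted estimate and is not stated in the text above.

```python
import math

# Back-of-envelope check (assumed ice volume) of the ~11 m global-equivalent layer.
R_MARS_KM = 3389.5                                   # mean radius of Mars, km
surface_area_km2 = 4 * math.pi * R_MARS_KM**2        # ~1.44e8 km^2

ice_volume_km3 = 1.6e6                               # assumed south polar cap volume
depth_m = ice_volume_km3 / surface_area_km2 * 1000   # convert km to m
print(f"{depth_m:.1f} m")                            # ~11.1 m
```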
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system. 
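The "five to seven times" statement in this passage follows directly from the two D/H ratios quoted; the quick check below is illustrative arithmetic only.

```python
# Quick arithmetic check of the "five to seven times" D/H comparison.
d_h_mars, err = 9.3e-4, 1.7e-4   # Martian atmospheric D/H and quoted uncertainty
d_h_earth = 1.56e-4              # terrestrial reference D/H

print((d_h_mars - err) / d_h_earth)   # ~4.9  (lower bound)
print(d_h_mars / d_h_earth)           # ~6.0  (central value)
print((d_h_mars + err) / d_h_earth)   # ~7.1  (upper bound)
```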
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference and thus the delta-v needed to transfer between Mars and Earth is the second lowest for Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars has its closest approach to Earth (opposition) in a synodic period of 779.94 days. It should not be confused with Mars conjunction, where the Earth and Mars are at opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. 
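The ~780-day synodic period quoted in this passage follows from the two sidereal orbital periods via 1/P_syn = 1/P_Earth - 1/P_Mars. The sketch below is illustrative; the period values are commonly quoted figures rather than numbers taken from this article.

```python
# Illustrative check: synodic period of Mars from the two sidereal years,
# using 1/P_syn = 1/P_Earth - 1/P_Mars (commonly quoted period values assumed).
P_EARTH = 365.256   # days, Earth's sidereal year
P_MARS = 686.98     # days, Mars's sidereal year

p_syn = 1 / (1 / P_EARTH - 1 / P_MARS)
print(f"{p_syn:.1f} days")   # ~779.9 days between successive oppositions
```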
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from those of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by an international team of researchers suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"). More commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a 10-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the first use of a telescope for astronomical observation, including of Mars, was made by the Italian astronomer Galileo Galilei. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first accomplished by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. 
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 with an 84-centimetre (33 in) telescope, Eugène Antoniadi saw irregular patterns but no canali. The first spacecraft sent from Earth to Mars was the Soviet Union's Mars 1, intended to fly by the planet in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully return data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first close-up images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many earlier conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering elements of the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA offers two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from Mars missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind owing to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
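The habitable-zone statement above can be illustrated with a standard radiative-balance estimate of Mars's equilibrium temperature; this is a rough sketch, and the solar luminosity, Mars's mean orbital distance of about 1.52 AU, and an assumed Bond albedo of roughly 0.25 are textbook values rather than figures taken from this article:

\[
T_{\mathrm{eq}} = \left(\frac{L_{\odot}(1-A)}{16\pi\sigma d^{2}}\right)^{1/4} \approx \left(\frac{3.8\times10^{26}\ \mathrm{W}\times0.75}{16\pi \times 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \times (2.28\times10^{11}\ \mathrm{m})^{2}}\right)^{1/4} \approx 210\ \mathrm{K}.
\]

An average of roughly 210 K lies well below the freezing point of water, which, together with the low atmospheric pressure described above, illustrates why liquid water cannot persist over large regions of the present-day surface without substantial greenhouse warming.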
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts and known on Earth to preserve signs of life, has also been found on the surfaces of impact craters on Mars; such glass could likewise have preserved signs of Martian life, if any existed at those sites. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin for the rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021 China announced plans to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared within the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and by in situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Large_Magellanic_Cloud] | [TOKENS: 2456] |
Contents Large Magellanic Cloud The Large Magellanic Cloud (LMC) is a dwarf galaxy and satellite galaxy of the Milky Way. At a distance of around 50 kiloparsecs (163,000 light-years), the LMC is the second- or third-closest galaxy to the Milky Way, after the Sagittarius Dwarf Spheroidal (c. 16 kiloparsecs (52,000 light-years) away) and the possible dwarf irregular galaxy called the Canis Major Overdensity. It is about 9.86 kiloparsecs (32,200 light-years) across, and has roughly one-hundredth the mass of the Milky Way, making it the fourth-largest galaxy in the Local Group, after the Andromeda Galaxy (M31), the Milky Way, and the Triangulum Galaxy (M33). The LMC is classified as a Magellanic spiral. It contains a stellar bar that is geometrically off-center, suggesting that it was once a barred dwarf spiral galaxy before its spiral arms were disrupted, likely by tidal interactions from the nearby Small Magellanic Cloud (SMC) and the Milky Way's gravity. The LMC is predicted to merge with the Milky Way in approximately 2.4 billion years. With a declination of about −70°, the LMC is visible as a faint "cloud" from the Southern Hemisphere of the Earth and from as far north as 20° N. It straddles the constellations Dorado and Mensa and has an apparent length of about 10° to the naked eye, 20 times the Moon's diameter, from dark sites away from light pollution. History of observation Both the Large and Small Magellanic Clouds have been easily visible for southern nighttime observers well back into prehistory. It has been claimed that the first known written mention of the Large Magellanic Cloud was by the Persian astronomer 'Abd al-Rahman al-Sufi Shirazi (later known in Europe as "Azophi"), who referred to it as Al Bakr, the White Ox, in his Book of Fixed Stars around 964 AD. However, this seems to be a misunderstanding of a reference to some stars south of Canopus which he admitted he had not seen. The first confirmed recorded observation was in 1503–1504 by Amerigo Vespucci in a letter about his third voyage. He mentioned "three Canopes [sic], two bright and one obscure"; "bright" refers to the two Magellanic Clouds, and "obscure" refers to the Coalsack. Ferdinand Magellan sighted the LMC on his voyage in 1519, and his writings brought it into common Western knowledge. The galaxy now bears his name. In the current epoch, the galaxy and the southern end of Dorado reach opposition on about 5 December, when they are visible from sunset to sunrise from equatorial locations such as Ecuador, the Congos, Uganda, Kenya and Indonesia, and for part of the night in nearby months. At latitudes south of about 28° S, including most of Australia and South Africa, the galaxy is always sufficiently above the horizon to be considered properly circumpolar; thus during spring and autumn the cloud is also visible for much of the night, while the height of winter in June nearly coincides with its closest proximity to the Sun's apparent position. Measurements with the Hubble Space Telescope, announced in 2006, suggest the Large and Small Magellanic Clouds may be moving too quickly to be orbiting the Milky Way. Astronomers discovered a new black hole inside the Large Magellanic Cloud in November 2021 using the European Southern Observatory's Very Large Telescope in Chile; they detected it through its gravitational influence on a nearby star about five times the mass of the Sun.
In March 2025, the Center for Astrophysics announced strong evidence for a supermassive black hole in the Large Magellanic Cloud, the second-closest such black hole after Sagittarius A*, with an estimated mass 600,000 times that of the Sun. Geometry The Large Magellanic Cloud has a prominent central bar and spiral arm. The central bar, with a radius of 6,900 light-years (2.13 kpc) and a position angle of about 121°, seems to be warped so that the east and west ends are nearer the Milky Way than the middle. In 2014, measurements from the Hubble Space Telescope made it possible to determine a rotation period of 250 million years. The LMC was long considered to be a planar galaxy that could be assumed to lie at a single distance from the Solar System. However, in 1986, Caldwell and Coulson found that field Cepheid variables in the northeast lie closer to the Milky Way than those in the southwest. From 2001 to 2002, this inclined geometry was confirmed by the same means, by core helium-burning red clump stars, and by the tip of the red giant branch. All three papers find an inclination of ~35°, where a face-on galaxy has an inclination of 0°. Further work on the structure of the LMC using the kinematics of carbon stars showed that the LMC's disk is both thick and flared, likely due to interactions with the SMC. Regarding the distribution of star clusters in the LMC, Schommer et al. measured velocities for ~80 clusters and found that the LMC's cluster system has kinematics consistent with the clusters moving in a disk-like distribution. These results were confirmed by Grocholski et al., who calculated distances to a sample of clusters and showed that the cluster system is distributed in the same plane as the field stars. Distance The distance to the LMC has been calculated using standard candles; Cepheid variables are one of the most popular. These have been shown to have a relationship between their absolute luminosity and the period over which their brightness varies. However, metallicity may also need to be taken into account, as the consensus is that it likely affects the period-luminosity relation. Cepheid variables in the Milky Way typically used to calibrate the relation are more metal-rich than those found in the LMC. Modern 8-meter-class optical telescopes have discovered eclipsing binaries throughout the Local Group. Parameters of these systems can be measured without mass or compositional assumptions. The light echoes of supernova 1987A are also geometric measurements, without any stellar models or assumptions. In 2006, the Cepheid absolute luminosity was re-calibrated using Cepheid variables in the galaxy Messier 106 that cover a range of metallicities. Using this improved calibration, the study found an absolute distance modulus of (m − M)₀ = 18.41, or 48 kpc (160,000 light-years). This distance has been confirmed by other authors. By cross-correlating different measurement methods, one can bound the distance; the residual errors are now less than the estimated size parameters of the LMC. The results of a study using late-type eclipsing binaries to determine the distance more accurately were published in the scientific journal Nature in March 2013. A distance of 49.97 kpc (163,000 light-years) with an accuracy of 2.2% was obtained. Features Like many irregular galaxies, the LMC is rich in gas and dust, and is currently undergoing vigorous star formation activity. It holds the Tarantula Nebula, the most active star-forming region in the Local Group.
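The 48 kpc figure quoted in the distance discussion above follows directly from the stated distance modulus through the standard relation between apparent and absolute magnitude; as a sketch using only the value given in the text:

\[
d = 10^{\,(m-M)_{0}/5\,+\,1}\ \mathrm{pc} = 10^{\,18.41/5\,+\,1}\ \mathrm{pc} \approx 4.8\times10^{4}\ \mathrm{pc} = 48\ \mathrm{kpc},
\]

which corresponds to roughly 160,000 light-years, matching the conversion given above.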
The LMC has a wide range of galactic objects and phenomena that make it known as an "astronomical treasure-house, a great celestial laboratory for the study of the growth and evolution of the stars", per Robert Burnham Jr. Surveys of the galaxy have found roughly 60 globular clusters, 400 planetary nebulae and 700 open clusters, along with hundreds of thousands of giant and supergiant stars. Supernova 1987A—the nearest supernova in recent years—was in the Large Magellanic Cloud. The Lionel-Murphy SNR (N86) nitrogen-abundant supernova remnant was named by astronomers at the Australian National University's Mount Stromlo Observatory, acknowledging Australian High Court Justice Lionel Murphy's interest in science and its perceived resemblance to his large nose. A bridge of gas connects the Small Magellanic Cloud (SMC) with the LMC, which evinces tidal interaction between the galaxies. The Magellanic Clouds have a common envelope of neutral hydrogen, indicating that they have been gravitationally bound for a long time. This bridge of gas is a star-forming site. The Large Magellanic Cloud likely has a supermassive black hole at its center, estimated to have 630,000 (+370,000/−380,000) times the mass of the Sun. Twenty-one hypervelocity stars have been discovered within the Milky Way's halo; they are thought to have been ejected from the Large Magellanic Cloud after gravitational interaction with this black hole via the Hills mechanism. X-ray sources No X-rays above background were detected from either cloud during the Nike-Tomahawk rocket flight of September 20, 1966, or during a second flight two days later. The second took off from Johnston Atoll at 17:13 UTC and reached an apogee of 160 km (99 mi), with spin-stabilization at 5.6 rps. The LMC was not detected in the X-ray range 8–80 keV. Another was launched from the same atoll at 11:32 UTC on October 29, 1968, to scan the LMC for X-rays. The first discrete X-ray source in Dorado was at RA 05h 20m Dec −69°, and it was the Large Magellanic Cloud. This X-ray source extended over about 12° and is consistent with the Cloud. Its emission rate between 1.5 and 10.5 keV, for a distance of 50 kpc, is 4×10³⁸ erg/s. An X-ray astronomy instrument was carried aboard a Thor missile launched from the same atoll on September 24, 1970, at 12:54 UTC, reaching altitudes above 300 km (190 mi), to search for the Small Magellanic Cloud and to extend observation of the LMC. The source in the LMC appeared extended and contained the star ε Dor. The X-ray luminosity (Lx) over the range 1.5–12 keV was 6×10³¹ W (6×10³⁸ erg/s). The Large Magellanic Cloud (LMC) appears in the constellations Mensa and Dorado. LMC X-1 (the first X-ray source in the LMC) is at RA 05h 40m 05s Dec −69° 45′ 51″, and is a high-mass X-ray binary (HMXB) star system. Of the first five luminous LMC X-ray binaries (LMC X-1, X-2, X-3, X-4 and A 0538–66, the last detected by Ariel 5), LMC X-2 is the only one that is a bright low-mass X-ray binary (LMXB) system. DEM L316 in the Cloud consists of two supernova remnants. Chandra X-ray spectra show that the hot gas shell on the upper left has an abundance of iron. This implies that the upper-left SNR is the product of a Type Ia supernova; the much lower iron abundance in the lower remnant points instead to a Type II supernova. A 16 ms X-ray pulsar is associated with SNR 0538-69.1. SNR 0540-697 was resolved using ROSAT. Gallery References External links |
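As a brief cross-check of the X-ray figures quoted above: the two quoted luminosities are the same quantity expressed in different units, since 1 W = 10^{7} erg/s, and at the assumed distance of 50 kpc the corresponding flux at Earth follows from the inverse-square law (the flux value below is back-calculated for illustration and does not appear in the article):

\[
L_{x} = 6\times10^{31}\ \mathrm{W} = 6\times10^{38}\ \mathrm{erg\,s^{-1}}, \qquad F = \frac{L_{x}}{4\pi d^{2}} = \frac{6\times10^{38}\ \mathrm{erg\,s^{-1}}}{4\pi\,(1.54\times10^{23}\ \mathrm{cm})^{2}} \approx 2\times10^{-9}\ \mathrm{erg\,cm^{-2}\,s^{-1}}.
\]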
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-160] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed to the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partially needed to use Microsoft's cloud-computing service Azure. From September to December, 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, and they added MS-Copilot to many installations of Windows and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding their right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company’s cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This aggressive spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion. The deal made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the company's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations for Altman's return failed, and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating that they would quit their jobs and join Microsoft if the board did not rehire Altman and then itself resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that a Microsoft employee, initially unnamed, had joined the board as a non-voting observer of the company's operations; Microsoft gave up the observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI also acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky’s capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI’s ChatGPT Health product and was intended to strengthen the company’s medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non‑Nvidia AI chips. In September 2025, it was revealed that OpenAI signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI’s models could be integrated into Amazon’s digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google’s position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry. Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. OpenAI released its deep research agent nine days later; it scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2. According to the company, the model is better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this departure from its earlier openness. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team said it had not received anything close to 20%. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media. Paul Nakasone then joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated preempting state AI laws with federal legislation. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroached on matters better regulated by the federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed a lawsuit against OpenAI on copyright grounds. The lawsuit was said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm", specifically including "weapons development" and "military and warfare". Its new policies prohibit users from "[using] our service to harm yourself or others" and from using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. Separately, Stein-Erik Soelberg, 56, allegedly murdered his mother, Suzanne Adams; in the months prior, Soelberg, reportedly paranoid and delusional, had often discussed his ideas with ChatGPT. In December 2025, Adams's estate sued OpenAI, claiming that the company shared responsibility due to the risk of "chatbot psychosis", although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would work to make ChatGPT safer for users who appear disconnected from reality. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orthodox_Judaism] | [TOKENS: 15558] |
Contents Orthodox Judaism Orthodox Judaism is a collective term for the traditionalist branches of contemporary Judaism. Theologically, it is chiefly defined by regarding the Torah, both Written and Oral, as literally revealed by God on Mount Sinai and faithfully transmitted ever since. Orthodox Judaism therefore advocates a strict observance of Jewish law, or halakha, which is to be interpreted and determined only according to traditional methods and in adherence to the continuum of received precedent through the ages. It regards the entire halakhic system as ultimately grounded in immutable revelation, essentially beyond external and historical influence. More than any theoretical issue, obeying the dietary, purity, ethical and other laws of halakha is the hallmark of Orthodoxy. Practicing members are easily distinguishable by their lifestyle, refraining from doing numerous routine actions on the Sabbath and holidays, consuming only kosher food, praying thrice a day, studying the Torah often, donning head covering and tassels for men and modest clothing for women, and so forth. Other key doctrines include belief in a future bodily resurrection of the dead, divine reward and punishment for the righteous and the sinners, the Election of Israel as a people bound by a covenant with God, and an eventual reign of a salvific Messiah who will restore the Temple in Jerusalem and gather the people to Zion. Orthodox Judaism is not a centralized denomination. Relations between its different subgroups are often strained, and the exact limits of Orthodoxy are subject to intense debate. Very roughly, it may be divided between the Haredi (ultra-Orthodox) branch, which is more conservative and reclusive, and the Modern Orthodox, which is relatively open to outer society and partakes in secular life and culture. Each of those is itself formed of independent communities. These are almost uniformly exclusionist, regarding Orthodoxy as the only legitimate form of Judaism. While adhering to traditional beliefs, the movement is a modern phenomenon. It arose as a result of the breakdown of the autonomous Jewish community since the late 18th century, and was much shaped by a conscious struggle against the pressures of secularization, acculturation and rival alternatives. The strictly observant Orthodox are a definite minority among all Jews, but there are also numerous semi- and non-practicing persons who are affiliated or personally identify with Orthodox communities and organizations. In total, Orthodox Judaism is the largest Jewish religious group, estimated to have over 2 million practicing adherents, and at least an equal number of nominal members or self-identifying supporters. Definitions The earliest known mention of the term Orthodox Jews was made in the Berlinische Monatsschrift in 1795. The word Orthodox was borrowed from the general German Enlightenment discourse, and used to denote those Jews who opposed Enlightenment. During the early and mid-19th century, with the advent of the progressive movements among German Jews, and especially early Reform Judaism, the title Orthodox became the epithet of traditionalists who espoused conservative positions on the issues raised by modernization. They themselves often disliked the Christian term, preferring titles such as "Torah-true" (gesetztreu). They often declared that they used it only as a convenience. 
German Orthodox leader Rabbi Samson Raphael Hirsch referred to "the conviction commonly designated as Orthodox Judaism"; in 1882, when Rabbi Azriel Hildesheimer became convinced that the public understood that his philosophy and Liberal Judaism were radically different, he removed the word Orthodox from the name of his Hildesheimer Rabbinical Seminary. By the 1920s, the term had become common and accepted even in Eastern Europe. Orthodoxy perceives itself as the only authentic continuation of Judaism as it was until the crisis of modernity. Its progressive opponents often shared this view, regarding it as a remnant of the past and lending credit to their own rivals' ideology.: 5–22 Thus, the term Orthodox is often used generically to refer to traditional (even if only in the sense that it is unrelated to the modernist movements) synagogues, rites, and observances. Academic research noted that the formation of Orthodox ideology and organizations was itself influenced by modernity. This was brought about by the need to defend the very concept of tradition in a world where that was no longer self-evident. When secularization and the dismantlement of communal structures uprooted the old order of Jewish life, traditionalist elements united to form groups that had a specific self-understanding. This, and all that it entailed, constituted a notable change, for the Orthodox had to adapt to modern society no less than anyone else; they developed novel, sometimes radically so, means of action and modes of thought. "Orthodoxization" was a contingent process, drawing from local circumstances and dependent on the threat sensed by its proponents: a sharply delineated Orthodox identity appeared in Central Europe, in Germany and Hungary, by the 1860s; a less stark one emerged in Eastern Europe during the Interwar period. Among the Jews of the Muslim lands, similar processes on a large scale began only around the 1970s, after they immigrated to Israel. Orthodoxy is often described as extremely conservative, ossifying a once-dynamic tradition due to the fear of legitimizing change. While this was sometimes true, its defining feature was not forbidding change and "freezing" Jewish heritage, but rather the need to adapt to being but a segment of Judaism in a modern world inhospitable to traditional practice, often employing much accommodation and leniency. In the mid-1980s, research on Orthodox Judaism became a scholarly discipline, examining how the need to confront modernity shaped and changed its beliefs, ideologies, social structure, and halakhic rulings, separating it from traditional Jewish society. History Until the latter half of the 18th century, Jewish communities in Central and Western Europe were autonomous entities, with distinct privileges and obligations. They were led by the affluent wardens' class judicially subject to rabbinical courts, which governed most civil matters. Jewish Law was considered normative and enforced upon transgressors (common sinning was rebuked, but tolerated) invoking all communal sanctions: imprisonment, taxation, flogging, pillorying, and, especially, excommunication. Cultural, economic, and social exchange with non-Jewish society was limited and regulated. This state of affairs came to an end with the rise of the modern, centralized state, which appropriated all authority. The nobility, clergy, urban guilds, and all other corporate estates were gradually stripped of privileges, inadvertently creating a more equal and secularized society. 
The Jews were one of the groups affected: excommunication was banned, and rabbinic courts lost almost all their jurisdiction. The state, especially following the French Revolution, was more and more inclined to tolerate Jews as a religious sect, but not as an autonomous entity, and sought to reform and integrate them as "useful subjects". Jewish emancipation and equal rights were discussed. The Christian (and especially Protestant) separation of "religious" and "secular" was applied to Jewish affairs, to which these concepts were alien. The rabbis were bemused when the state expected them to assume pastoral care, foregoing their principal judicial role. Of secondary importance, much less than the civil and legal transformations, were the ideas of Enlightenment that chafed at the authority of tradition and faith. By the end of the 18th century, the weakened rabbinic establishment was facing a new kind of transgressor: they could not be classified as tolerable sinners overcome by their urges (khote le-te'avon), or as schismatics like the Sabbateans or Frankists, against whom sanctions were levied. Their attitudes did not fit the criteria set when faith was a normative and self-evident part of worldly life, but rested on the realities of the new, secularized age. The wardens' class, which wielded most power within the communities, was rapidly acculturating and often sought to oblige the state's agenda. Rabbi Elazar Fleckeles, who returned to Prague from the countryside in 1783, recalled that he first faced there "new vices" of principled irreverence towards tradition, rather than "old vices" such as gossip or fornication. In Hamburg, Rabbi Raphael Cohen attempted to reinforce traditional norms. Cohen ordered the men in his community to grow a beard, forbade holding hands with one's wife in public, and decried women who wore wigs, instead of visible headgear, to cover their hair; Cohen taxed and otherwise persecuted members of the priestly caste who left the city to marry divorcees, men who appealed to state courts, those who ate food cooked by Gentiles, and other transgressors. Hamburg's Jews repeatedly appealed to the civil authorities, which eventually justified Cohen. However, the unprecedented meddling in his jurisdiction profoundly shocked him and dealt a blow to the prestige of the rabbinate. An ideological challenge to rabbinic authority, in contrast to prosaic secularization, appeared in the form of the Haskalah (Jewish Enlightenment) movement which came to the fore in 1782. Hartwig Wessely, Moses Mendelssohn, and other maskilim called for a reform of Jewish education, abolition of coercion in matters of conscience, and other modernizing measures. They bypassed rabbinic approval and set themselves, at least implicitly, as a rival intellectual elite. A bitter struggle ensued. Reacting to Mendelssohn's assertion that freedom of conscience must replace communal censure, Rabbi Cohen of Hamburg commented: The very foundation of the Law and commandments rests on coercion, enabling to force obedience and punish the transgressor. Denying this fact is akin to denying the sun at noon. However, maskilic-rabbinic rivalry ended in most of Central Europe, as governments imposed modernization upon their Jewish subjects. Schools replaced traditional cheders, and standard German began to supplant Yiddish. 
Differences between the establishment and the Enlightened became irrelevant, and the former often embraced the views of the latter (now antiquated, as more aggressive modes of acculturation replaced the Haskalah program). In 1810, when philanthropist Israel Jacobson opened what was later identified as the first Reform synagogue in Seesen, with modernized rituals, he encountered little protest. The founding of the Hamburg Temple in 1818 mobilized the conservative elements. The organizers of the synagogue wished to appeal to acculturated Jews with a modernized ritual. They openly defied not just the local rabbinic court that ordered them to desist, but published learned tracts that castigated the entire rabbinical elite as hypocritical and obscurant. The moral threat they posed to rabbinic authority, as well as halakhic issues such as having a gentile play an organ on the Sabbath, were combined with theological issues. The Temple's revised prayer book omitted or rephrased petitions for the coming of the Messiah and renewal of sacrifices (post factum, it was considered to be the first Reform liturgy). More than anything else, this doctrinal breach alarmed the traditionalists. Dozens of rabbis from across Europe united in support of the Hamburg rabbinic court, banning the major practices enacted there and offering halakhic grounds for forbidding any changes. Most historians concur that the 1818–1821 Hamburg Temple dispute, with its concerted backlash against Reform and the emergence of a self-aware conservative ideology, marks the beginning of Orthodox Judaism. The leader and organizer of the Orthodox camp during the dispute, and the most influential figure in early Orthodoxy, was Rabbi Moses Sofer of Pressburg, Hungary. Historian Jacob Katz regarded him as the first to grasp the realities of the modern age. Sofer understood that what remained of his political clout would soon disappear, and that he had largely lost the ability to enforce observance; as Katz wrote, "obedience to halakha became dependent on recognizing its validity, and this very validity was challenged by those who did not obey." He was deeply troubled by reports from his native Frankfurt and the arrival from the west of dismissed rabbis, ejected by progressive wardens, or pious families, fearing for the education of their children. These émigrés often became ardent followers. Sofer's response to the crisis of traditional Jewish society was unremitting conservatism, canonizing every detail of prevalent norms in the observant community lest any compromise legitimize the progressives' claim that the law was fluid or redundant. He was unwilling to trade halakhic opinions for those he considered to be pretending to honor the rules of rabbinic discourse, while intending to undermine them. Sofer regarded traditional customs as equivalent to vows; he warned in 1793 that even the "custom of ignoramuses" (one known to be rooted solely in a mistake of the common masses) was to be meticulously observed and revered. Sofer was frank and vehement about his stance, stating during the Hamburg dispute that prayers in the vernacular were not problematic per se, but he forbade them because they constituted an innovation. He succinctly expressed his attitude in wordplay he borrowed from the Talmud: "The new (Chadash, originally meaning new grain) is forbidden by the Torah anywhere." 
Regarding the new, ideologically-driven sinners, Sofer commented in 1818 that they should have been anathemized and banished from the People of Israel like earlier heretical sects. Unlike most, if not all, rabbis in Central Europe, who had little choice but to compromise, Sofer enjoyed unique circumstances. He, too, had to tread carefully during the 1810s, tolerating a modernized synagogue in Pressburg and other innovations, and his yeshiva was nearly closed by warden Wolf Breisach. But in 1822, three poor (and therefore traditional) community members, whose deceased apostate brother bequeathed them a large fortune, rose to the wardens' board. Breisach died soon after, and the Pressburg community became dominated by the conservatives. Sofer also possessed a strong base in the form of his yeshiva, the world's largest at the time, with hundreds of students. And crucially, the large and privileged Hungarian nobility blocked most imperial reforms in the backward country, including those relevant to the Jews. Hungarian Jewry retained its pre-modern character well into the 19th century, allowing Sofer's disciples to establish a score of new yeshivas, at a time when these institutions were rapidly closing in the west, and a strong rabbinate to appoint them. A generation later, a self-aware Orthodoxy was well entrenched in the country. Hungarian Jewry gave rise both to Orthodoxy in general, in the sense of a comprehensive response to modernity, and specifically to the traditionalist, militant ultra-Orthodoxy. The 1818–1821 controversy also elicited a different response, which first arose in its very epicenter. Severe protests did not affect Temple congregants, eventually leading the wardens of Hamburg's Jewish community to a comprehensive compromise for the sake of unity. They replaced the elderly, traditional Chief Dayan Baruch Oser with Isaac Bernays. The latter was a university graduate, clean-shaven, and modern, who could appeal to the acculturated and the young. Bernays signified a new era, and historians marked him as the first modern rabbi, fitting the demands of emancipation: his contract forbade him to tax, punish, or coerce, and he lacked political or judiciary power. He was forbidden from interfering in the Temple's conduct. Conservative in the principal issues of faith, in aesthetic, cultural, and civil matters, Bernays was a reformer. He introduced secular studies for children, wore a cassock like a Protestant clergyman, and delivered vernacular sermons. He forbade the spontaneous, informal character of synagogue conduct typical of Ashkenazi tradition, and ordered prayers to be somber and dignified. Bernays' style re-unified the Hamburg community by accommodating their aesthetic demands (but not theological ones, raised by only a learned few). The combination of religious conservatism and modernity in everything else was emulated elsewhere, earning the label "Neo-Orthodoxy". Bernays and his like-minded followers, such as Rabbi Jacob Ettlinger, fully accepted the platform of the moderate Haskalah, taking away its progressive edge. While old-style traditional life continued in Germany until the 1840s, secularization and acculturation turned Neo-Orthodoxy into the strict right-wing of German Jewry. It was fully articulated by Bernays' mid-century disciples Samson Raphael Hirsch and Azriel Hildesheimer. 
Hirsch, a Hamburg native who was ten during the Temple dispute, combined Orthodox dogmatism and militancy against rival interpretations of Judaism with leniency on many cultural issues and an embrace of German culture. The novel mixture termed "Neo-Orthodoxy" spread. While insisting on strict observance, the movement both tolerated and advocated modernization: traditionally rare formal religious education for girls was introduced; modesty and gender separation were relaxed to match German society; men went clean-shaven and dressed like Gentiles; and exclusive Torah study virtually disappeared. Basic religious studies incorporating German Bildung provided children with practical halakhic knowledge for thriving in modern society. Ritual was reformed to match prevalent aesthetic conceptions, much like non-Orthodox synagogues though without the ideological undertone, and the liturgy was often abbreviated. Neo-Orthodoxy mostly did not attempt to reconcile its conduct with halakhic or moral norms. Instead it adopted compartmentalization, de facto limiting Judaism to the private and religious spheres, while otherwise yielding to outer society. While conservative rabbis in Hungary still thought in terms of the now-lost communal autonomy, the Neo-Orthodox turned Judaism from an all-encompassing practice into a private religious conviction. In the late 1830s, modernist pressures in Germany shifted from the secularization debate, moving into the "purely religious" sphere of theology and liturgy. A new generation of university-trained rabbis (many German states required communal rabbis to possess such education) sought to reconcile Judaism with the historical-critical study of scripture and the dominant philosophies of the day, especially Kant and Hegel. Influenced by the critical "Science of Judaism" (Wissenschaft des Judentums) pioneered by Leopold Zunz, and often in emulation of the Liberal Protestant milieu, they reexamined and undermined beliefs held as sacred in traditional circles, especially the notion of an unbroken chain from Sinai to the Sages. The more radical among the Wissenschaft rabbis, unwilling to limit critical analysis or its practical application, coalesced around Rabbi Abraham Geiger to establish Reform Judaism. Between 1844 and 1846, Geiger organized three rabbinical synods in Braunschweig, Frankfurt and Breslau to determine how to refashion Judaism for present times. The Reform conferences were met with uproar by the Orthodox. Warden Hirsch Lehren of Amsterdam and Rabbi Jacob Ettlinger of Altona both organized anti-Reform manifestos, denouncing the new initiatives, signed by scores of rabbis from Europe and the Middle East. The tone of the signatories varied considerably along geographic lines: letters from traditional societies in Eastern Europe and the Ottoman Empire implored local leaders to petition the authorities and have them ban the movement. Signers from Central and Western Europe used terms commensurate with the liberal age. All were asked by the organizers to be brief and accessible; complex halakhic arguments, intended to convince the rabbinic elite in past generations, were replaced by an appeal to the secularized masses. The struggle with Wissenschaft criticism shaped the Orthodox. For centuries, Ashkenazi rabbinic authorities espoused Nahmanides' position that Talmudic exegesis, which derived laws from the Torah's text by employing hermeneutics, was binding d'Oraita.
Geiger and others presented exegesis as an arbitrary, illogical process, and consequently defenders of tradition embraced Maimonides' claim that the Sages merely buttressed already received laws with biblical citations, rather than actually deriving them. Jay Harris commented, "An insulated orthodox, or, rather, traditional rabbinate, feeling no pressing need to defend the validity of the Oral Law, could confidently appropriate the vision of most medieval rabbinic scholars; a defensive German Orthodoxy, by contrast, could not. ... Thus began a shift in understanding that led Orthodox rabbis and historians in the modern period to insist that the entire Oral Law was revealed by God to Moses at Sinai." 19th-century Orthodox commentaries, like those authored by Malbim, attempted to amplify the notion that the Oral and Written Law were intertwined and inseparable. Wissenschaft posed a greater challenge to the modernized neo-Orthodox than to the traditionalists. Hirsch and Hildesheimer divided on the matter, anticipating modernist Orthodox attitudes to the historical-critical method. Hirsch argued that analyzing minutiae of tradition as products of their historical context was akin to denying its divine origin and timeless relevance. Hildesheimer consented to research under limits, subjugating it to the predetermined sanctity of the subject matter and accepting its results only when they accorded with the latter. More importantly, while he was content to engage academically, he opposed its practical application in religious questions, requiring traditional methods to be used. Hildesheimer's approach was emulated by his disciple Rabbi David Zvi Hoffmann, a scholar and apologist. His polemic against the Graf-Wellhausen hypothesis formed the classical Orthodox response to Higher Criticism. Hoffmann declared that for him, the unity of the Pentateuch was a given, regardless of research. Hirsch often lambasted Hoffmann for contextualizing rabbinic literature. All of them stressed the importance of dogmatic adherence to Torah min ha-Shamayim, which led them to conflict with Rabbi Zecharias Frankel, Chancellor of the Jewish Theological Seminary of Breslau. Unlike the Reform camp, Frankel insisted on strict observance and displayed great reverence towards tradition. But though appreciated by conservatives, his practice of Wissenschaft left him suspect to Hirsch and Hildesheimer. They demanded again and again that he state his beliefs concerning the nature of revelation. In 1859, Frankel published a critical study of the Mishnah, and added that all commandments classified as "Law given to Moses at Sinai" were merely customs (he broadened Asher ben Jehiel's opinion). Hirsch and Hildesheimer seized the opportunity and launched a public campaign against him, accusing him of heresy. Concerned that public opinion regarded both neo-Orthodoxy and Frankel's "Positive-Historical School" centered at Breslau as similarly observant and traditionalist, the two stressed that the difference was dogmatic and not halakhic. They managed to tarnish Frankel's reputation in the traditional camp and delegitimized him for many. The Positive-Historical School is regarded by Conservative Judaism as an intellectual forerunner. While Hildesheimer distinguished Frankel's observant disciples from Reform proponents, he wrote in his diary: "how meager is the principal difference between the Breslau School, who don silk gloves at their work, and Geiger, who wields a sledgehammer."
During the 1840s in Germany, as traditionalists became a clear minority, some Orthodox rabbis, such as Salomo Eger of Posen, urged the adoption of Moses Sofer's position and to anathemize the principally nonobservant. Eating, worshipping or marrying with them were to be banned. Rabbi Jacob Ettlinger, whose journal Treue Zionswächter was the first regular Orthodox newspaper (signifying the coalescence of a distinct Orthodox mindset), rejected their call. Ettlinger, and German neo-Orthodoxy in his wake, chose to regard the modern secularized Jew as a transgressor rather than a schismatic. He adopted Maimonides' interpretation of the Talmudic concept tinok shenishba (captured infant), a Jew by birth who was not raised as such and therefore could be absolved for not practicing, and greatly expanded it to serve the Orthodox need to tolerate the nonobservant majority (many of their own congregants ignored strict practice). For example, he allowed congregants to drink wine poured by Sabbath desecrators, and to ignore other halakhic sanctions. Yet German neo-Orthodoxy could not legitimize nonobservance, and adopted a hierarchical approach, softer than traditional sanctions, but no less intent on differentiating sinners and righteous. Reform rabbis or lay leaders, considered ideological opponents, were castigated, while the common mass was to be carefully handled. Some German neo-Orthodox believed that while doomed to minority status in their native country, their ideology could successfully confront modernity and unify Judaism in more traditional communities to the east. In 1847, Hirsch was elected Chief Rabbi of Moravia, where old rabbinic culture and yeshivas operated. His expectations were dashed as traditionalist rabbis scorned him for his European manners and lack of Talmudic acumen. They became enraged by his attempts to reform synagogues and to establish a rabbinical seminary including secular studies. The progressives viewed him as too conservative. After four years of constant strife, he lost faith in the possibility of reuniting the Jewish public. In 1851, a group in Frankfurt am Main that opposed the Reform character of the Jewish community turned to Hirsch. He led them for the remainder of his life, finding Frankfurt a hospitable site for his unique ideology, which amalgamated acculturation, dogmatic theology, thorough observance, and strict secession from the non-Orthodox. That year, Hildesheimer visited Hungary. Confounded by urbanization and acculturation – and the rise of Neology, a nonobservant laity served by rabbis who mostly favoured the Positive-Historical approach – the elderly local rabbis at first welcomed Hildesheimer. He opened a modern school in Eisenstadt that combined secular and religious studies. Traditionalists such as Moshe Schick and Yehudah Aszód sent their sons to study there. Samuel Benjamin Sofer, the heir of late Hatam Sofer, considered appointing Hildesheimer as his assistant-rabbi in Pressburg and instituting secular studies in the city's great yeshiva. The rabbi of Eisenstadt believed that only a full-fledged modern rabbinical seminary could fulfill his neo-Orthodox agenda. In the 1850s and 1860s, however, a radical reactionary Orthodox party coalesced in the northeastern regions of Hungary. Led by Rabbi Hillel Lichtenstein, his son-in-law Akiva Yosef Schlesinger and decisor Chaim Sofer, the "zealots" were shocked by the demise of the traditional world into which they had been born. 
Like Moses Sofer a generation before them, these Orthodox émigrés moved east, to a pre-modern environment that they were determined to safeguard. Lichtenstein ruled out any compromise with modernity, insisting on maintaining Yiddish and traditional dress. They considered the Neologs as moving outside of Judaism, and were more concerned with neo-Orthodoxy, which they regarded as a thinly veiled gateway for a similar fate. Chaim Sofer summarized their view of Hildesheimer: "The wicked Hildesheimer is the horse and chariot of the Evil Inclination... All the heretics in the last century did not seek to undermine the Law and the Faith as he does." In their struggle against acculturation, the Hungarian ultra-Orthodox struggled to provide strong halakhic arguments. Michael Silber wrote: "These issues, even most of the religious reforms, fell into gray areas not easily treated within Halakha. It was often too flexible or ambiguous, at times silent, or worse yet, embarrassingly lenient." Schlesinger was forced to venture outside of normative law, into mystical writings and other fringe sources, to buttress his ideology. Most Hungarian Orthodox rabbis, while sympathetic to the "zealots"' cause, dismissed their legal arguments. In 1865, the ultra-Orthodox convened in Nagymihály and issued a ban on various synagogue reforms, intended not against the Neologs but against developments in the Orthodox camp, especially after Samuel Sofer violated his father's expressed ban and instituted vernacular sermons in Pressburg. Schick, the country's most prominent decisor, and other leading rabbis refused to sign, though they did not publicly oppose the decree. Hildesheimer's planned seminary was too radical for the mainstream rabbis, and he became marginalized and isolated by 1864. The internal Orthodox division was complicated by growing tension with the Neologs. In 1869, the Hungarian government convened a General Jewish Congress that was aimed at creating a national representative body. Fearing Neolog domination, the Orthodox seceded from the Congress and appealed to Parliament in the name of religious freedom. This demonstrated the internalization of the new circumstances. In 1851, Orthodox leader Meir Eisenstaedter petitioned the authorities to restore the coercive powers of the communities. In 1871 the government recognized a separate Orthodox national committee. Communities that refused to join either side, labeled "Status Quo", were subject to Orthodox condemnation. However, the Orthodox tolerated nonobservant Jews as long as they affiliated with the national committee: Adam Ferziger claimed that membership and loyalty, rather than beliefs and ritual behavior, emerged as the definitive manifestation of Jewish identity. The Hungarian schism was the most radical internal separation among the Jews of Europe. Hildesheimer returned to Germany soon after, disillusioned though not as pessimistic as Hirsch. He was appointed rabbi of the Orthodox sub-community in Berlin (which had separate religious institutions but was not formally independent of the Liberal majority), where he finally established his seminary. In 1877, a law enabling Jews to secede from their communities without conversion was passed in Germany. It was a stark example that Judaism was now confessional, not corporate. Hirsch withdrew his congregation from the Frankfurt community and decreed that all Orthodox should do the same. 
However, unlike the heterogeneous communities of Hungary, which often consisted of recent immigrants, Frankfurt and most German communities were close-knit. The majority of Hirsch's congregants enlisted Rabbi Seligman Baer Bamberger, who was older and more conservative. Bamberger was concerned with the principle of unity among the People of Israel and dismissive of Hirsch, whom he regarded as unlearned and overly assimilated. He decreed that since the mother community was willing to finance Orthodox services and allow them religious freedom, secession was unwarranted. Eventually, fewer than 80 families from Hirsch's 300-strong congregation followed their rabbi. The vast majority of the 15%–20% of German Jews affiliated with Orthodox institutions cared little for the polemics. They did not secede, owing to financial considerations and family ties. Only a handful of Secessionist, Austrittorthodox, communities were established in the Reich; almost everyone remained Communal Orthodox, Gemeindeorthodox, within Liberal mother congregations. The Communal Orthodox argued that their approach was true to Jewish unity and decisive in maintaining public standards of observance and traditional education in Liberal communities. The Secessionists, in turn, viewed them as hypocritical middle-of-the-roaders. The conflicts in Hungary and Germany, and the emergence of distinctly Orthodox communities and ideologies, were the exception rather than the rule in Central and Western Europe. France, Britain, Bohemia, Austria and other countries saw both a virtual disappearance of observance and serious interest in bridging Judaism and modernity. The official rabbinate remained technically traditional, not introducing ideological change. The organ – a symbol of Reform in Germany since 1818, so much so that Hildesheimer seminarians had to sign a declaration that they would never serve in a synagogue that introduced one – was accepted with little qualm by the French Consistoire in 1856. It was part of a series of synagogue regulations passed by Chief Rabbi Salomon Ulmann. Even Rabbi Solomon Klein of Colmar, the leader of Alsatian conservatives who partook in the castigation of Zecharias Frankel, allowed the instrument in his community. In England, Rabbi Nathan Marcus Adler's United Synagogue shared a similar approach: it was vehemently conservative in principle and combated ideological reformers, yet served a nonobservant public – as Todd Endelman noted, "While respectful of tradition, most English-born Jews were not orthodox in terms of personal practice. Nonetheless they were content to remain within an orthodox congregational framework" – and introduced considerable synagogue reforms. The much-belated pace of modernization in Russia, Congress Poland and the Romanian principalities, where harsh discrimination and active persecution of the Jews continued until 1917, delayed the crisis of traditional society for decades. Old-style education in the heder and yeshiva remained the norm, retaining Hebrew as the language of the elite and Yiddish as the vernacular. The defining fault-line of Eastern European Jews was between the Hasidim and the Misnagdic reaction against them. Reform attempts by the Czar's government, like the school modernization under Max Lilienthal or the foundation of rabbinical seminaries and the mandating of communities to appoint clerks known as "official rabbis", all had little influence.
Communal autonomy and the rabbinic courts' jurisdiction were abolished in 1844, but economic and social seclusion remained, ensuring the authority of Jewish institutions and traditions de facto. In 1880, there were only 21,308 Jewish pupils in government schools, out of some 5 million Jews in total; in 1897, 97% of the 5.2 million Jews in the Pale of Settlement and Congress Poland declared Yiddish their mother tongue, and only 26% possessed any literacy in Russian. Though the Eastern European Haskalah challenged the traditional establishment – unlike its western counterpart, it was not rendered irrelevant by acculturation, and it flourished from the 1820s until the 1890s – the establishment's hegemony over the vast majority was self-evident. The leading rabbis maintained the old conception of communal unity: in 1882, when an Orthodox party in Galicia appealed for the right of secession, the Netziv and other Russian rabbis declared it forbidden and contrary to the idea of Israel's oneness. While slow, change was by no means absent. In the 1860s and 1870s, anticipating a communal disintegration like the one in the west, moderate maskilic rabbis like Yitzchak Yaacov Reines and Yechiel Michel Pines called for inclusion of secular studies in the heders and yeshivas, a careful modernization, and an ecumenical attempt to form a consensus on necessary adaptation of halakha to novel times. Their initiative was thwarted by a combination of strong anti-traditional invective on the part of the radical, secularist maskilim and conservative intransigence from the leading rabbis, especially during the bitter polemic which erupted after Moshe Leib Lilienblum's 1868 call for a reconsideration of Talmudic strictures. Reines, Pines and their associates would gradually form the nucleus of Religious Zionism, while their conservative opponents would eventually adopt the epithet Haredim (then, and also much later, still a generic term for the observant and the pious). The attitude toward Jewish nationalism, particularly Zionism, and its nonobservant if not staunchly secularist leaders and partisans, was the key question facing the traditionalists of Eastern Europe. Closely intertwined were issues of modernization in general: as noted by Joseph Salmon, the future religious Zionists (organized in the Mizrahi since 1902) were not only supportive of the national agenda per se, but deeply motivated by criticism of the prevalent Jewish society, a positive reaction to modernity and a willingness to tolerate nonobservance while affirming traditional faith and practice. Their proto-Haredi opponents sharply rejected all of the former positions and espoused staunch conservatism, which idealized existing norms. Any illusion that differences could be smoothed over and a united observant pro-Zionist front formed was dashed between 1897 and 1899, as both the Eastern European nationalist intellectuals and Theodor Herzl himself revealed an uncompromising secularist agenda, forcing traditionalist leaders to pick sides. In 1900, the anti-Zionist pamphlet Or la-Yesharim, endorsed by many Russian and Polish rabbis, largely demarcated the lines between the proto-Haredi majority and the Mizrahi minority, and terminated dialogue; in 1911, when the 10th World Zionist Congress voted in favour of propagating non-religious cultural work and education, a large segment of the Mizrahi seceded and joined the anti-Zionists.
In 1907, Eastern European proto-Haredi elements formed the Knesseth Israel party, a modern framework created in recognition of the deficiencies of existing institutions. It dissipated within a year. German Neo-Orthodoxy, in the meantime, developed a keen interest in the traditional Jewish masses of Russia and Poland; if in the past they had been considered primitive, disillusionment with emancipation and enlightenment made many assimilated young German Orthodox Jews embark on journeys to East European yeshivot in search of authenticity. The German secessionists already possessed a platform of their own, the Freie Vereinigung für die Interessen des Orthodoxen Judentums, founded by Samson Raphael Hirsch in 1885. In 1912, two German FVIOJ leaders, Isaac Breuer and Jacob Rosenheim, managed to organize a meeting of 300 seceding Mizrahi, proto-Haredi and secessionist Neo-Orthodox delegates in Katowice, creating the Agudath Israel party. While the Germans were a tiny minority in comparison to the Eastern Europeans, their modern education made them a prominent elite in the new organization, which strove to provide a comprehensive response to world Jewry's challenges in a strictly observant spirit. The Agudah immediately formed its Council of Torah Sages as its supreme rabbinic leadership body. Many ultra-traditionalist elements in Eastern Europe, like the Belz and Lubavitch Hasidim, refused to join, viewing the movement as a dangerous innovation; and the organized Orthodox in Hungary rejected it as well, especially after it did not affirm a commitment to communal secession in 1923. In the interwar period, sweeping secularization and acculturation deracinated old Jewish society in Eastern Europe. The October Revolution granted civil equality and imposed anti-religious persecution, radically transforming Russian Jewry within a decade; the lifting of formal discrimination also strongly affected the Jews of independent Poland, Lithuania and other states. In the 1930s, it was estimated that no more than 20%–33% of Poland's Jews, the last stronghold of traditionalism where many were still living in rural and culturally secluded communities, could be considered strictly observant. Only upon having become an embattled (though still quite large) minority did the local traditionalists complete their transformation into Orthodox, albeit never as starkly as in Hungary or Germany. Eastern European Orthodoxy, whether Agudah or Mizrahi, always preferred cultural and educational independence to communal secession, and maintained strong ties and self-identification with the general Jewish public. Within its ranks, the 150-year-long struggle between Hasidim and Misnagdim largely subsided; the latter were henceforth even dubbed "Litvaks", as the anti-Hasidic component in their identity was marginalized. In the interwar period, Rabbi Yisrael Meir Kagan emerged as the popular leader of the Eastern European Orthodox, particularly the Agudah-leaning. American Jewry of the 19th century was small and immigrant-based, lacking traditional institutions or a strong rabbinic presence. Voluntary congregations, rather than corporate communities, were the norm; separation of church and state, and the dynamic religiosity of the independent Protestant model, shaped synagogue life.
In the mid-19th century, Reform Judaism spread rapidly, advocating a formal relinquishment of traditions that very few observed anyhow in the secularized, open environment; the United States would be derisively named the Treife Medina, or "Profane Country", in Yiddish. Conservative elements, concerned mainly with public standards of observance in critical fields like marriage, rallied around Isaac Leeser. Lacking rabbinic ordination and not especially learned by European standards, Leeser was an ultra-traditionalist in his American milieu. In 1845 he introduced the words "Orthodox" and "Orthodoxy" into the American Jewish discourse, in the sense of opposing Reform; while admiring Samson Raphael Hirsch, Leeser was an even stauncher proponent of Zecharias Frankel, whom he considered the "leader of the Orthodox party" at a time when Positive-Historical and Orthodox positions were barely discernible from each other to most observers (in 1861, Leeser defended Frankel in the polemic instigated by Hirsch). A broad non-Reform camp slowly coalesced as the minority within American Jewry; while strict in relation to their progressive opponents, they served a nonobservant public and instituted thorough synagogue reforms – omission of piyyutim from the liturgy, English-language sermons and secular education for the clergy were the norm in most, and many Orthodox synagogues in America did not partition men and women. In 1885, the antinomian Pittsburgh Platform moved a broad coalition of conservative religious leaders to found the Jewish Theological Seminary of America. They variously termed their ideology, which was never consistent and mainly motivated by a rejection of Reform, as "Enlightened Orthodoxy" or "Conservative Judaism". The latter term would only gradually assume a clearly distinct meaning. To their right, strictly traditionalist Eastern European immigrants formed the Union of Orthodox Rabbis in 1902, in direct opposition to the Americanized character of the Orthodox Union (OU) and JTS. The UOR frowned upon English-language sermons, secular education, and acculturation in general. Even before that, in 1897, an old-style yeshiva, RIETS, was founded in New York. Eventually, its students rebelled in 1908, demanding a modern rabbinic training much like that of their peers at JTS. In 1915, RIETS was reorganized as a decidedly Modern Orthodox institution, and a merger with the JTS was discussed. In 1923, the Rabbinical Council of America was established as the clerical association of the OU. Only in the postwar era did the vague traditional coalition come to a definite end. During and after the Holocaust, a new wave of strictly observant refugees arrived from Eastern and Central Europe. They often regarded even the UOR as too lenient and Americanized. Typical of these was Rabbi Aaron Kotler, who established Lakewood Yeshiva in New Jersey in 1943. Alarmed by the enticing American environment, Kotler turned his institution into an enclave, around which an entire community slowly evolved. It was very different from his prewar yeshiva at Kletsk, Poland, the students of which were but a segment of the general Jewish population and mingled with the rest. Lakewood pioneered the homogeneous, voluntary and enclavist model of postwar Haredi communities, which were independent entities with their own developing subculture. The new arrivals soon dominated the traditionalist wing of American Jewry, forcing the locals to adopt more rigorous positions.
Concurrently, the younger generation in the JTS and the Rabbinical Assembly demanded greater clarity, theological unambiguity, and halakhic independence from the Orthodox veto on serious innovations; in 1935, for example, the RA had yielded to such pressure and shelved its proposal for a solution to the agunah predicament. "Conservative Judaism", now adopted as an exclusive label by most JTS graduates and RA members, became a truly distinct movement. In 1950, the Conservatives signaled their break with Orthodox halakhic authorities by accepting a far-reaching legal decision that allowed driving to the synagogue and using electricity on the Sabbath. Between the ultra-Orthodox and Conservatives, Modern Orthodoxy in America also coalesced, becoming less a generic term and more a distinct movement. Its leader in the postwar era, Rabbi Joseph B. Soloveitchik, left Agudas Israel to adopt both pro-Zionist positions and a positive, if reserved, attitude toward Western culture. As dean of RIETS and honorary chair of RCA's halakha committee, Soloveitchik shaped Modern Orthodoxy for decades. While principled differences with the Conservatives were clear, as the RCA stressed the divinely revealed status of the Torah and a strict observance of halakha, sociological boundaries were less so. Many members of the Modern Orthodox public were barely observant, and a considerable number of communities did not, for many years, install a gender partition in their synagogues – physically separate seating became the distinguishing mark of Orthodox/Conservative affiliation in the 1950s and was strongly promulgated by the RCA. As late as 1997, seven OU congregations still lacked a partition. Theology Judaism never formulated a conclusive credo; whether it possesses binding dogma at all remains controversial. Some researchers argued that the importance of daily practice and adherence to halakha (Jewish law) mooted theoretical issues. Others dismissed this view entirely, citing ancient rabbinic debates that castigated various heresies with little reference to observance. However, even without a uniform doctrine, Orthodox Judaism is basically united in its core beliefs. Disavowing them is a major blasphemy. Several medieval authorities attempted to codify these beliefs, including Saadia Gaon and Joseph Albo. Each composed a creed, although the 13 principles expounded by Maimonides in his 1160s Commentary on the Mishna remained the most widely accepted. Various points were contested by many of Maimonides' contemporaries and later sages, such as the exact formulation and the status of disbelievers (whether they were merely misinformed or heretics to be expelled). Similarly, Albo listed only three fundamentals, and did not regard the Messiah as a key tenet. Many who objected argued that the entire corpus of the Torah and the sayings of ancient sages were of canonical stature, rather than a few selected points. In later centuries, the 13 Principles came to be considered universally binding and cardinal by Orthodox authorities. During the Middle Ages, two systems of thought competed for primacy. The rationalist-philosophic school endeavored to present all commandments as serving higher moral and ethical purposes, while the mystical tradition, exemplified in Kabbalah, assigned each rite a role in hidden dimensions of reality. Sheer obedience, derived from faithfulness to one's community and ancestry, was believed sufficient for the common people, while the educated chose one of the two schools.
In the modern era, the prestige of both declined, and "naive faith" became popular. At a time when contemplation of matters of belief was associated with secularization, luminaries such as Yisrael Meir Kagan stressed the importance of simple, unsophisticated commitment to the precepts passed down from the Sages of blessed memory. This became standard in the Haredi world. Judaism adheres to monotheism, the belief in one God. The basic tenets of Orthodoxy, drawn from ancient sources like the Talmud and later sages, chiefly include the attributes of God in Judaism: one and indivisible, preceding all creation, which God alone brought into being, eternal, omniscient, omnipotent, absolutely incorporeal, and beyond human reason. This basis is evoked in many foundational texts, and is repeated often in daily prayers, such as in Judaism's creed-like Shema Yisrael: "Hear, O Israel, the Lord is our God, the Lord is One." Maimonides delineated this understanding of a personal God in his opening six articles. The six concern God's status as the sole creator, his oneness, his impalpability, that he is first and last, that God alone, and no other being, may be worshipped, and that he is omniscient. The supremacy of the God of Israel is even applied to non-Jews. According to most rabbinic opinions, non-Jews are banned from the worship of other deities. However, they are allowed to "associate" lower divine beings with their faith in God (mostly to allow contact with Christians, accepting that they were not idolaters, with whom business dealings and the like are forbidden). The utter imperceptibility of God, considered beyond human reason and only reachable through what he chooses to reveal, was emphasized, among other things, in the ancient ban on making any image of him. Maimonides and virtually all sages in his time and thereafter stressed that the creator is incorporeal, lacking "any semblance of a body". While incorporeality has been all but taken for granted since the Middle Ages, Maimonides and his contemporaries reported that anthropomorphic conceptions of God were quite common in their time. The medieval tension between God's transcendence and equanimity, and his contact and interest in his creation, found its most popular resolution in the Kabbalah. Kabbalists asserted that while God himself is beyond the universe, he progressively unfolds into the created realm via a series of emanations, or sefirot, each a refraction of the perfect godhead. While widely received, this system proved contentious and some authorities lambasted it as a threat to God's unity. In modern times it is upheld, at least tacitly, in many traditionalist Orthodox circles, while Modern Orthodoxy mostly simply ignores it. The defining doctrine of Orthodox Judaism is the belief that God revealed the Torah ("Teaching" or "Law") to Moses on Mount Sinai, both the written scripture of the Torah and the Oral Torah explicating it, and that sages promulgated it faithfully from Sinai in an unbroken chain. One of the foundational texts of rabbinic literature is the list opening the Pirkei Avot, enumerating the sages, from Moses through Joshua, the Seventy Elders, and the Prophets, and then onward until Hillel the Elder and Shammai. This core belief is referred to in classical sources as "The Law/Teaching is from the Heavens" (Torah min HaShamayim). Orthodoxy holds that the body of revelation is total and complete.
Its interpretation and application under new circumstances, required of every generation's scholars, is an act of inferring and elaborating, not of innovation or addition. One clause in the Jerusalem Talmud asserts that anything a veteran disciple shall teach was already given at Sinai; a story in the Babylonian Talmud claims that Moses was taken aback upon seeing, in a vision, the immensely intricate deductions of the future Rabbi Akiva, until Akiva proclaimed that all he was teaching had been handed down to Moses at Sinai. The Written and Oral Torah are held to be intertwined and mutually reliant. The latter is a source of many divine commandments, and the text of the Pentateuch is not seen as comprehensible on its own. God's will may be surmised only by appealing to the Oral Torah, which reveals the text's allegorical, anagogical, or tropological meaning, rather than by a literal reading. Lacunae in received tradition or disagreements between early sages are attributed to disruptions, especially persecutions in which "the Torah was forgotten in Israel." According to rabbinic lore, these eventually compelled the legists to write down the Oral Law in the Mishna and Talmud. The wholeness of the original divine message and the reliability of those who transmitted it are axiomatic. One of the primary intellectual exercises of Torah scholars is to locate discrepancies between Talmudic or other passages and then demonstrate by complex logical steps (presumably proving each passage referred to a slightly different situation, etc.) that no contradiction actually obtains. Orthodox Judaism considers revelation as propositional, explicit, verbal, and unambiguous. Revelation serves as a firm source of authority for religious commandments. Modernist understandings of revelation as a subjective, humanly conditioned experience are rejected. Some thinkers at the movement's liberal end promoted such views, although they found virtually no acceptance from the establishment. An important ramification of Torah min HaShamayim in modern times is the reserved, and often totally rejectionist, attitude of Orthodoxy toward the historical-critical method, particularly higher Biblical criticism. The refusal by rabbis to employ such tools, insisting on traditional methods and the need for consensus and continuity with past authorities, separates the most liberal-leaning Orthodox rabbinic circles from the most conservative non-Orthodox ones. While the Sinai event is held to be the supreme act of revelation, rabbinic tradition acknowledges matters addressed by the Prophets, as well as later divine announcements. The Kabbalah, as revealed to illustrious past figures and passed on through elite circles, is widely (albeit not universally) esteemed. While some prominent rabbis considered Kabbalah a late forgery, most generally accepted it as legitimate. However, its status in determining normative halakhic decision-making, which is binding for the entire community, and not just for spiritualists who voluntarily adopt kabbalistic strictures, was always controversial. Leading decisors openly applied criteria from Kabbalah in their rulings, while others did so only inadvertently, and many denied it any normative role. A closely related mystical phenomenon is the belief in Magidim, supposed dreamlike apparitions or visions, which may impart certain divine knowledge to those who experience them. Belief in a future Messiah is central to Orthodox Judaism. 
According to this doctrine, a king will arise from King David's lineage, and will bring with him signs such as the restoration of the Temple, peace, and universal acceptance of the God of Israel. The Messiah will embark on a quest to gather all Jews to the Holy Land, will proclaim prophethood, and will restore the Davidic monarchy. Classical Judaism incorporated a tradition of belief in the resurrection of the dead. The scriptural basis for this doctrine, as quoted by the Mishnah, is: "All Israelites have a share in the World-to-Come, as it is written: 'And your people, all of them righteous, Shall possess the land for all time; They are the shoot that I planted, My handiwork in which I glory.'" The Mishnah also brands as a heretic any Jew who rejects the doctrine of resurrection or its Torah origin. Those who deny the doctrine are deemed to receive no share in the World-to-Come. The Pharisees believed in both a bodily resurrection and an immortal soul. They also believed that acts in this world would affect the state of life in the next world. Mishnah Sanhedrin 10 clarifies that only those who follow the correct theology have a place in the World to Come. Other passing references to the afterlife appear in Mishnaic tractates. Berakhot indicates that the Jewish belief in the afterlife was established long before the compilation of the Mishnah. Biblical tradition mentions Sheol sixty-five times. It is described as an underworld containing the gathering of the dead with their families. Numbers 16:30 states that Korah went into Sheol alive, describing his death as divine retribution. The deceased who reside in Sheol have a "nebulous" existence. There is no reward or punishment in Sheol, which is represented as a dark and gloomy place, though a distinction is made for kings, who are said to be greeted by other kings when entering Sheol. Biblical poetry suggests that resurrection from Sheol is possible. Prophetic narratives of resurrection in the Bible have been labelled as an external cultural influence by some scholars. Talmudic discourse expanded on the details of the World to Come, partly to motivate Jewish compliance with religious codes. In brief, the righteous will be rewarded with a place in Gan Eden, the wicked will be punished in Gehinnom, and the resurrection will take place in the Messianic age. The sequence of these events is unclear. The rabbis support the concept of resurrection with Biblical citations and present it as a sign of God's omnipotence. Practice A relatively thorough observance of halakha – rather than theological and doctrinal matters, which produce diverse opinions – is the concrete demarcation line separating Orthodoxy from other Jewish movements. As noted by researchers and communal leaders, Orthodox subgroups have a sense of commitment towards the Law, perceiving it as seriously binding, which is rarely visible outside the movement. The halakha, like any jurisprudence, is not a definitive set of rules, but rather an expanding discourse. Its authority is derived from the belief in divine revelation, but rabbis interpret and apply it, basing their mandate on biblical verses such as "and thou shalt observe to do according to all that they inform thee". From ancient to modern times, rabbinic discourse was fraught with controversy (machloket), with sages disagreeing over various points of law. The Talmud itself is mainly a record of such disputes. 
The Orthodox continue to believe that such disagreements flow naturally from the divinity of Jewish Law, which is presumed to contain a solution for any possible question. As long as both contesting parties base their arguments on received hermeneutics and precedents and are driven by sincere faith, "both these and those are the words of the Living God" (a Talmudic statement originally attributed to a divine proclamation during a dispute between the House of Hillel and the House of Shammai). Majority opinions were accepted and reified, though many disagreements remain unresolved as new ones appear. This plurality of opinion allows decisors, rabbis tasked with determining the legal stance in subjects without precedent, to weigh a range of options, based on methods derived from earlier authorities. The most basic form of halakhic discourse is the responsa literature, in which rabbis answered questions posed by laypeople or other rabbis, thus setting precedent. The system's oldest and most basic sources are the Mishna and the Talmuds, augmented by the Geonim. Those were followed by the great codes which sought to assemble and standardize the laws, including Rabbi Isaac Alfasi's Hilchot HaRif, Maimonides' Mishneh Torah, and Rabbi Asher ben Jehiel's work (colloquially called the Rosh). These three works were the main basis of Rabbi Jacob ben Asher's Arba'ah Turim, which in turn became the basis of one of the latest and most authoritative codifications – the 1565 Shulchan Aruch, or "Set Table", by Rabbi Joseph Karo. This work gained canonical status and became almost synonymous with the halakhic system. However, no later authority accepted it in its entirety (for example, Orthodox Jews wear phylacteries in a manner different from the one advocated there), and it was immediately contested or re-interpreted by various commentaries, most prominently the gloss written by Rabbi Moses Isserles named HaMapah ("The Tablecloth"). Halakhic literature continued to expand and evolve. New authoritative guides continued to be compiled and canonized, up to popular 20th-century works such as the Mishnah Berurah. The most important distinction within halakha is between all laws derived from God's revelation (d'Oraita) and those enacted by human authorities (d'Rabanan), who are believed to have been empowered by God to legislate as necessary. The former are either directly understood, derived via various hermeneutics, or attributed to commandments handed down to Moses. The authority to pass measures d'Rabanan is itself subject to debate – Maimonides stated that absolute obedience to rabbinic decrees is stipulated by the verse "and thou shalt observe", while Nachmanides argued that such severity is unfounded, though he accepted such enactments as binding, albeit less so than the divine commandments. A Talmudic maxim states that when in doubt regarding a matter d'Oraita, one must rule stringently, but leniently when it concerns d'Rabanan. Many arguments in halakhic literature revolve around whether a detail is derived from the former or the latter source, and under which circumstances. Commandments or prohibitions d'Rabanan, though less stringent than d'Oraita, are an important facet of Jewish law. They range from the 2nd century BCE establishment of Hanukkah, to bypassing the Biblical remission of debts in the Sabbatical year via the Prozbul, and up to the 1950 marital rules standardized by the Chief Rabbinate of Israel, which forbade polygamy and levirate marriage even in communities that still practiced them. 
A third major component buttressing Orthodox and other traditional practice is local or familial custom, Minhag. The development and acceptance of customs as binding, more than disagreements between decisors, is the main source of diversity in matters of practice across geographic or ethnic boundaries. While the reverence accorded to Minhag in rabbinic literature spans the extremes, from "a custom may uproot halakha" to wholly dismissive attitudes, custom was generally accepted as binding by scholars, and drew its power from popular adherence and routine. Ashkenazim, Sephardim, Teimanim, and others have distinct prayer rites, kosher emphases (for example, by the 12th century it had become an Ashkenazi custom to avoid legumes on Passover), and other distinctions. The influence of custom even upset scholars, who noted that the common masses observed Minhag yet ignored important divine decrees. Rabbinic leadership, tasked with implementing and interpreting tradition, changed considerably over the centuries, separating Orthodoxy from pre-modern Judaism. Since the demise of the Geonim, who led the Jewish world up to 1038, halakha was adjudicated locally, and the final arbiter was mostly the local rabbi, the Mara d'Athra (Master of the Area), who was responsible for judicially instructing his community. Emancipation and modern transport and communication made this model untenable. While Orthodox communities, especially the more conservative ones, have rabbis who technically fill this capacity, the public generally follows more broadly known authorities who are not limited by geography and whose influence rests on reverence and peer pressure rather than coercion. These may be popular heads of Talmudic academies, renowned decisors, or, in the Hasidic world, hereditary rebbes. Their influence varies considerably: in conservative Orthodox circles, mainly Haredi, rabbis possess strong authority and often exercise leadership. Bodies such as the Council of Torah Sages, the Council of Torah Luminaries, the Central Rabbinical Congress, and the Orthodox Council of Jerusalem are all held as the arbiters in their respective communities. In the more liberal Orthodox sectors, rabbis are revered and consulted, but rarely exert direct control. Orthodox Judaism emphasizes observing the rules of kashrut, Shabbat, family purity, and tefilah (daily prayer). Many Orthodox can be identified by their dress and family lifestyle. Orthodox men and women dress modestly, covering most of their skin. Married women cover their hair, with scarves (tichel), veils, snoods, turbans, hats, berets, or wigs. Orthodox men wear a ritual fringe called tzitzit and a head covering. Many men grow beards, and Haredi men wear suits with black hats over a skullcap. Modern Orthodox Jews may adopt the dress of general society, although they, too, wear kippahs and tzitzit. On Shabbat, Modern Orthodox men wear suits (or at least a dress shirt) and dress pants, while women wear dressier clothing. Orthodox Jews follow the laws of negiah (touch): they do not engage in physical contact with those of the opposite sex other than their spouse or immediate family members. Kol isha prohibits a woman from singing in the presence of a man (with the same exceptions as negiah). Doorposts have a mezuzah. Separate sinks for meat and dairy have become increasingly common. Diversity Orthodox Judaism lacks a central framework and a common leadership. 
It is not a "denomination" in the structural sense, but a spectrum of groups, united in broadly affirming matters of belief and practice, which share a consciousness and a common discourse. Individual rabbis, particularly recognized decisors, often gain respect across boundaries, but each community largely elevates its own leaders (for example, the Haredi world shares a sense of common identity, yet comprises hundreds of independent communities with their own rabbis). The limits and boundaries of Orthodoxy are also controversial. No encompassing definition has found acceptance. Moderately conservative subgroups hotly criticize more liberal groups for deviation, while strict hard-liners dismiss the latter as non-Orthodox. Contentious topics range from the abstract and theoretical, such as the attitude toward the study of scripture, to the mundane and pressing, such as modesty rules. As in any other broad religious movement, an intrinsic tension connects the ideological and the sociological dimensions of Orthodox Judaism – while elites and intellectuals define adherence in theoretical terms, the masses use societal, familial, and institutional affiliation. The latter may be neither strictly observant nor fully accepting of the tenets of faith. Demographics Based on calculations made in 1990, Professors Daniel Elazar and Rela Mintz Geffen estimated in 2012 that there were at least 2,000,000 observant Orthodox Jews worldwide, and at least 2,000,000 additional members and supporters who identified as such. This estimate held Orthodoxy to be the largest Jewish group. In the State of Israel, where the total Jewish population is about 6.5 million, 22% of all Jewish respondents to a 2016 Pew survey declared themselves as observant Orthodox (9% Haredim, 13% Datiim, or "religious"). 29% described themselves as "traditional", a label implying less observance but continued identification with Orthodoxy. The Orthodox community of the United States is the second-largest in the world, concentrated in the Northeast and specifically in New York and New Jersey. A 2013 Pew survey found that 10% of respondents identified as Orthodox, among a total Jewish population of at least 5.5 million: 3% were Modern Orthodox, 6% were Haredi, and 1% were "other" (Sephardic, liberal Orthodox, etc.). In the United Kingdom, of 79,597 households with at least one Jewish member that held synagogue membership in 2016, 66% affiliated with Orthodox synagogues: 53% with "centrist Orthodox" and 13% with "strictly Orthodox" synagogues (a further 3% were Sephardi, a community that technically eschews the title "Orthodox"). The Orthodox have higher birth rates than other Jews. Haredi communities have some of the world's highest birth rates, averaging six children per household. A nearly non-existent rate of intermarriage with members of other faiths (the Orthodox vehemently oppose the phenomenon) contributes to their growing share of the world's Jewish population. Among American Jewish children, the Orthodox share is an estimated 61% in New York, including 49% Haredi. Similar patterns are observed in other countries. If present trends are sustained, Orthodox Jews are projected to numerically dominate British Jewry by 2031 and American Jewry by 2058. However, large numbers of members leave their communities and the observant lifestyle. Among the 2013 Pew respondents, 17% of those under 30 who were raised Orthodox disaffiliated (in earlier generations, this trend was far more prevalent, and 77% of those over 65 left). 
Groups The most recognizable sub-group is the Haredim (literally, 'trembling' or 'fervent'), also known by labels such as "strictly Orthodox". They are the most traditional part of the Orthodox. Haredim engage minimally with, or wholly reject, modern society; give precedence to religious values; and accept a high degree of rabbinic involvement in daily life. Haredi rabbis and communities generally accept one another and accord each other legitimacy. They are organized in large political structures, mainly Agudath Israel of America and the Israeli United Torah Judaism party. Other organized groups include the anti-Zionist Central Rabbinical Congress and the Edah HaChareidis. They are easily discerned by their mode of dress, often mostly black for men and very modest, by religious standards, for women (including hair covering, long skirts, etc.). The Haredim may be roughly classified into three sub-groups: Hasidism originated in 18th-century Eastern Europe as a revival movement that defied the rabbinical establishment. The threat of modernity turned the movement towards conservatism and reconciled it with traditionalist elements. Hasidism espouses a mystical interpretation of religion. Each Hasidic community aligned with a hereditary leader known as a rebbe (who is almost always an ordained rabbi). While the spiritualist element of Hasidism declined through the centuries, the rebbes' authority stems from the mystical belief that the holiness of their ancestors is inborn. They exercise tight control over their followers. Each of the hundreds of independent Hasidic groups or sects (also called "courts" or "dynasties") has its own line of rebbes. Groups range in size from large ones with thousands of member households to very small ones. Courts often possess unique customs, religious emphases, philosophies, and styles of dress. Hasidic men, especially on the Sabbath, don long garments and fur hats, which were once a staple of Eastern European Jewry but are now associated almost exclusively with Hasidim. As of 2016, 130,000 Hasidic households were counted. The second Haredi group is the Litvaks, or Yeshivish. They originated, loosely, with the Misnagdim, the opponents of Hasidism, who were mainly concentrated in old Lithuania. The confrontation with Hasidism bred distinct ideologies and institutions, especially the great yeshivas, learning halls where the study of Torah for its own sake and admiration for the scholars who headed these schools were enshrined. With the advent of secularization, the Misnagdim largely abandoned their hostility towards Hasidism. They became defined by affiliation with their yeshivas, and their communities were sometimes composed of alumni. The prestige ascribed to the yeshivas as centers of Torah study (after they were rebuilt in Israel and America, bearing the names of the Eastern European yeshivas destroyed in the Holocaust) drew in many who were not of Misnagdic descent, and the term Litvak lost its original ethnic connotation. It is now applied to all non-Hasidic Haredim of Ashkenazi descent. The Litvak sector is led mainly by heads of yeshivas. The third Haredi movement consists of the Sephardic Haredim, who live mostly in Israel. There they are linked to the Shas party and the legacy of Rabbi Ovadia Yosef. Originating among the Mizrahi (Middle Eastern and North African Jewish) immigrants who arrived in the country in the 1950s, most Sephardi Haredim were educated in Litvak yeshivas and adopted their educators' mentality. 
Their identity developed in reaction to the racism they encountered. Shas arose in the 1980s, with the aim of reclaiming Sephardi religious legacy, in opposition to both secularism and the hegemony of European-descended Haredim. While living in strictly observant circles, they maintain a strong bond with the non-Haredi masses of Israeli Mizrahi society. In the West, especially in the United States, Modern Orthodoxy, or "Centrist Orthodoxy", is an umbrella term for communities that seek an observant lifestyle and traditional theology, while at the same time ascribing positive value to engagement (if not "synthesis") with the modern world. In the United States, the Modern Orthodox form a cohesive community, influenced by the legacy of leaders such as Rabbi Joseph B. Soloveitchik, and concentrated around Yeshiva University and institutions such as the Orthodox Union (OU) and the National Council of Young Israel. They affirm strict obedience to Jewish Law, the centrality of Torah study, and the importance of positive engagement with modern culture. In Israel, Religious Zionism represents the largest Orthodox public; its members are fervent Zionists. Religious Zionism supports the State of Israel and ascribes inherent religious value to it. The dominant ideological school, influenced by Rabbi Abraham Isaac Kook's thought, regards the state in messianic terms. Religious Zionism is not a uniform group, and the split between its conservative flank (often named "Chardal", or "National-Haredi") and more liberal elements has increased since the 1990s. The National Religious Party, once its single political platform, dissolved, and the common educational system became torn over issues such as gender separation in elementary schools and the place of secular studies. In Europe, "Centrist Orthodoxy" is represented by organizations such as the British United Synagogue and the Israelite Central Consistory of France, both the dominant official rabbinates in their respective countries. The laity is often non-observant, retaining formal affiliation due to familial piety or a sense of Jewish identity. Another large demographic usually considered Orthodox are the Israeli Masortim, or "traditionals". This moniker originated with Mizrahi immigrants who were secularized yet remained reverent toward their communal heritage. In recent years, however, Mizrahi intellectuals have developed a more reflective, nuanced understanding of the term, eschewing its shallow image and not necessarily accepting formal deference to Orthodox rabbis. Self-conscious Masorti identity is limited to small, elitist circles. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mashable] | [TOKENS: 276] |
Contents Mashable Mashable is a news website, digital media platform and entertainment company founded by Pete Cashmore in 2005. History Mashable was founded by Pete Cashmore while living in Aberdeen, Scotland, in July 2005. Early iterations of the site were a simple WordPress blog, with Cashmore as sole author. Fame came relatively quickly, with Time magazine noting Mashable as one of the 25 best blogs of 2009. As of November 2015, it had over 6,000,000 Twitter followers and over 3,200,000 fans on Facebook. In June 2016, it acquired YouTube channel CineFix from Whalerock Industries. In December 2017, Ziff Davis bought Mashable for $50 million, a price described by Recode as a "fire sale". Mashable had not been meeting its advertising targets, accumulating $4.2 million in losses in the quarter ending September 2017. After the sale, Mashable laid off 50 staff members but retained top management. Under Ziff Davis, Mashable has expanded into many countries across multiple continents, including Europe, Asia, the Middle East, and Australia, publishing in several languages. In June 2021, Jessica Coen, Mashable's editor-in-chief, left the company to join Morning Brew. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_note-9] | [TOKENS: 8626] |
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, Threads, and Manus. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as part of Big Tech, a term for the six largest tech companies in the United States: Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia, which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters priced the shares at $38 each, valuing the company at $104 billion, the largest valuation to that date for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Mobility and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations—surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods—and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline. 
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. S&P Dow Jones Indices added Facebook to the S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of its gaming service Facebook Gameroom (formerly Facebook Games Arcade) at the Unity Technologies developers conference. The client allowed Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook similar to TikTok that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, Oculus executive Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent its competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by companies such as Visa, Mastercard, PayPal and Uber. Each company in the consortium was expected to contribute $10 million to fund the launch of the cryptocurrency, named Libra. The Libra Association had planned to launch a limited-format cryptocurrency in 2021, depending on when it received approval from the Swiss Financial Market Supervisory Authority to operate as a payments service. Libra was renamed Diem before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021, about Facebook's plan to rebrand the company and change its name. 
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the company's pivot toward building the metaverse – without mentioning the rebranding or the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms were introduced at Facebook Connect on October 28, 2021. According to Facebook's PR campaign, the name change reflects the company's shifting long-term focus on building the metaverse, a digital extension of the physical world through social media, virtual reality, and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project; it would therefore transfer its rights to the name to Meta Platforms, and the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertising revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% drop in the company's share price that followed the news erased some $230 billion from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to published reports by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Meta's Facebook and Instagram (though not Meta-owned WhatsApp) were banned in Russia, and the company was added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (including allegedly genocidal calls) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smart glasses that could play music and take pictures. 
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales figures for the product line as of September 2022, though Meta expressed satisfaction with customer feedback. In July 2022, Meta saw its first year-on-year revenue decline when its total revenue slipped by 1% to $28.8 billion. Analysts and journalists attributed the decline to its advertising business, which had been limited by Apple's App Tracking Transparency feature and the number of people who opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite reaching the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gifted the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive's aim of making its data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record €1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after the company announced that it was increasing its focus on AI. On July 6, 2023, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, made available for commercial use via partnerships with major cloud providers such as Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage, instead operating with an open-source model that allowed it to ascertain what improvements needed to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use. An earlier version of Llama was released to academics. In August 2023, Meta announced its permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platforms. The Online News Act was in effect by year-end, but Meta did not participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent. 
Its stock reached an all-time high in January 2024, bringing Meta within 2% of achieving a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscriber model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts over false copyright claims, despite his content being original and free of copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, exploiting Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures such as Ammar al-Hakim. This situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses to be used internally. On 4 October 2024, Meta announced its new AI model, Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue. 
In November 2024, TechCrunch reported that Meta was considering building a $10 billion global underwater cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig-butchering scams. In December 2024, Meta announced that, beginning in February 2025, it would require advertisers running financial-services ads in Australia to verify information about the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage, impacting accounts on all of its social media and messaging applications. Outage reports from DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception allowing users to call LGBTQ people mentally ill on the basis of being gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta Platforms Inc. decided to make a multibillion-dollar investment in artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private-company funding events of all time. In October 2025, it was announced that Meta would be laying off 600 employees in its artificial intelligence unit in an effort to make the division leaner and more effective. The company described its AI unit as "bloated" and sought to trim down the department. The layoffs were expected to affect Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research unit (FAIR), and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR, which released its first consumer virtual reality headset in 2016, for $2.3 billion in cash and stock. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber. 
In late 2022, after Facebook, Inc. rebranded as Meta Platforms, Inc., Oculus was rebranded as Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million; Giphy was to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the display advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer in order to encourage companies to use its platform for business. The deal reportedly valued Kustomer at slightly over $1 billion. It was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerned the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers. 
Following Donald Trump's return to office in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, a Meta representative flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed workplace sexual harassment by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts and pages related to LGBTQ issues, reproductive health, and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "one of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to eliminate the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives, such as partnering with third-party fact-checkers and publicly flagging fake news, were regularly ineffective and appeared to have minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue disseminating a falsified video of US President Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform was excessive, the decision received criticism from fact-checking institutions, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, explicitly allowing users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity. 
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of the wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc., and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for significant and persistent breaches of privacy rules involving the Cambridge Analytica scandal. Every violation of the Privacy Act is subject to a theoretical cumulative liability of $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia, and the territory of Guam launched Federal Trade Commission v. Facebook, an antitrust lawsuit against the company. The lawsuit concerns Facebook's acquisition of two competitors—Instagram and WhatsApp—and the ensuing monopolistic situation. The FTC alleged that Facebook held monopoly power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp in order to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual of an internet in which the Facebook-WhatsApp-Instagram entity did not exist, and to prove that this combination harmed competition or consumers. In November 2025, it was ruled that Meta did not violate antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging poor working conditions for workers moderating Facebook posts in Kenya. According to the lawsuit, 260 screeners were declared redundant with unclear reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders, and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company refused to take responsibility. 
The company noted that tools had been developed for parents to keep track of their children's activity on Instagram and to set time limits, in addition to Meta's "Take a break" reminders. In addition, the company is providing resources specific to eating disorders as well as developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the ad-targeting tool at issue. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs sought approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021 could proceed. In April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users either to allow their personal data to be used for targeted advertisements or to pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence, and child sexual abuse. Meta moved the moderation service to the Ghanaian capital of Accra after legal issues at its previous location in Kenya. The new moderation contractor is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest that conditions there are worse than at the previous Kenyan location, with many workers afraid to speak out for fear of being sent back to conflict zones. Workers reported developing mental illnesses, attempting suicide, and receiving low pay. On 26 January 2026, a New Mexico state court case was filed alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of online "digital armies", filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties. 
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and the potential engagement level of the advertisement itself. Similar to other online advertising platforms like Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience, namely targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland will be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. the Irish GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta makes use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have it. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square office in London. Meta reportedly had another 18 years left on its lease on the site. As of 2023, Facebook operated 21 data centers. It committed to purchase 100% renewable energy and to reduce its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Frances Haugen, the ex-Facebook employee and whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_note-Jone11-37] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. The study of social networks is an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provides a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis experienced work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis. 
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego network analysis focuses on network characteristics, such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for use in social network analysis. The most prominent of these are Graph theory, Balance theory, Social comparison theory, and more recently, the Social identity approach.
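As a rough illustration of the scale-free properties described above (a handful of "hub" vertices whose degree greatly exceeds the average), the following sketch uses the Python networkx library to compare a preferential-attachment graph with a uniform random graph of the same size and density. The generator, graph size, parameters and seed are arbitrary choices made only for this example, not anything prescribed by the literature summarized here.

```python
# Illustrative sketch (assumed parameters, not from the article): contrast hub
# formation in a preferential-attachment graph with a uniform random graph of
# the same size and density. Requires the networkx library.
import networkx as nx

n, m = 1000, 2                                              # 1000 actors, 2 ties per new actor
ba = nx.barabasi_albert_graph(n, m, seed=42)                # scale-free-style network
er = nx.gnm_random_graph(n, ba.number_of_edges(), seed=42)  # random benchmark

for name, g in [("preferential attachment", ba), ("uniform random", er)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: mean degree = {sum(degrees) / n:.1f}, "
          f"max degree = {max(degrees)}, "
          f"average clustering = {nx.average_clustering(g):.3f}")
# The preferential-attachment graph typically shows a few "hubs" whose degree
# far exceeds the mean, the signature property described above.
```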
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibition. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. 
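The structural-hole ideas discussed above have standard quantitative counterparts, Burt's effective size and constraint, both implemented in the Python networkx library. The sketch below applies them to a small invented graph in which one actor bridges two otherwise separate clusters; the graph and its labels are illustrative assumptions, not data from any study cited here.

```python
# Minimal sketch of Burt-style structural-hole measures with networkx.
# The toy graph is invented: node "A" bridges two otherwise separate clusters,
# so it should show the largest effective size and the lowest constraint.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"), ("B", "F"), ("C", "F"),  # cluster 1
    ("A", "D"), ("A", "E"), ("D", "E"),                          # cluster 2
])

effective_size = nx.effective_size(G)   # non-redundant contacts per ego
constraint = nx.constraint(G)           # Burt's constraint: higher = fewer holes

for node in sorted(G.nodes()):
    print(f"{node}: effective size = {effective_size[node]:.2f}, "
          f"constraint = {constraint[node]:.2f}")
```

In this toy graph, "A" is the broker spanning the structural hole between the two clusters, so it gets the highest effective size and the lowest constraint, which is exactly the positional advantage the passage above attributes to well-placed players.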
Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as Dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or one culture and another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages, Indian slums, or in the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users.) For example, respondent driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. 
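One way to make the diffusion-of-innovations work described above concrete is a toy cascade simulation: seed a few adopters and let the innovation spread along ties with some probability. The network model, seed set and adoption probability below are invented for illustration (a simple independent-cascade-style process in Python with networkx), not a reconstruction of any of the experiments or field trials mentioned above.

```python
# Toy independent-cascade-style simulation of innovation diffusion, in the
# spirit of the diffusion studies described above. Network model, seed adopters
# and adoption probability are invented for illustration.
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(200, k=6, p=0.05, seed=1)  # small-world-style network
adopted = {0, 1}        # hypothetical early adopters
p_adopt = 0.15          # chance a tie transmits the innovation per step

frontier = set(adopted)
step = 0
while frontier:
    step += 1
    newly_adopted = set()
    for node in frontier:
        for neighbour in G.neighbors(node):
            if neighbour not in adopted and random.random() < p_adopt:
                newly_adopted.add(neighbour)
    adopted |= newly_adopted
    frontier = newly_adopted
    print(f"step {step}: {len(adopted)} adopters")
# How far and how fast the cascade spreads depends on clustering, bridging ties
# and the transmission probability, which is the structural point made above.
```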
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker, to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Research studies of formal or informal organization relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use. 
In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity. This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill, writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. 
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Network data can therefore be used to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within an existing network of individuals in a certain area. See also References Further reading External links |
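As a concrete, hedged illustration of using network data to measure homophily or segregation, as described above, the following sketch computes an attribute assortativity coefficient and a simple within-group tie share with the Python networkx library. The example graph (Zachary's karate club, bundled with networkx) and its "club" attribute are illustrative choices, not data from the studies referenced here.

```python
# Sketch of quantifying homophily on a categorical node attribute, as discussed
# above. Uses Zachary's karate club graph bundled with networkx (nodes carry a
# "club" attribute); the choice of example graph is an assumption for illustration.
import networkx as nx

G = nx.karate_club_graph()

# Attribute assortativity: +1 means ties form only within groups (strong homophily),
# 0 means mixing is random with respect to the attribute.
homophily = nx.attribute_assortativity_coefficient(G, "club")
print(f"assortativity on 'club': {homophily:.2f}")

# A simpler exposure-style measure: share of ties that stay within the same group.
within = sum(1 for u, v in G.edges() if G.nodes[u]["club"] == G.nodes[v]["club"])
print(f"within-group ties: {within / G.number_of_edges():.0%}")
```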
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_ref-:1_8-0] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and up until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have revealed a further occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners. In Stratum IVa there was a mudbrick wall with no stone foundations, with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI-III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V-II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BC, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod which was referred to as "al-Ludd" in Arabic served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla, as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth century in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596 Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax-rate of 33,3 % on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" =spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH/1641/2, the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. 
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); of the Christians, 921 were Orthodox, 4 Roman Catholic and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish state and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the Palestinian expulsion from Lydda and Ramle, with the expulsion of 50,000-70,000 Palestinians from Lydda and Ramle by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods and construction in Jewish areas was given priority over construction in Arab neighborhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6% as "Arab". Education According to CBS, 38 schools and 13,188 pupils are in the city. They are spread out as 26 elementary schools and 8,325 elementary school pupils, and 13 high schools and 4,863 high school pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co" - a subsidiary of the Strauss Group and "Kashev" - the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. 
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009-2010, Dor Guez held an exhibit, Georgeopolis, at the Petach Tikva art museum that focuses on Lod. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy) was established soon after, but folded in 2007. Notable people Twin towns-sister cities Lod is twinned with: See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-148] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted this line of research, but decided to build on the work it had done with Nintendo and Sega and develop it into its own console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation since Namco rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995) to the console. Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the machine's future compatibility should further hardware revisions be made. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success and long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. As one account of the US launch recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left to a round of applause from the audience. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because the trademark had been registered by another company; the market was initially taken over by the officially distributed Sega Saturn, but as the Sega console was withdrawn, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release it there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for certain letters, rendered in plain text as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications using C compilers. 
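As a rough, back-of-the-envelope illustration of the throughput figures quoted in the hardware description above, the short sketch below converts the stated per-second polygon rates into per-frame budgets. The 60 Hz (NTSC) and 50 Hz (PAL) refresh rates are assumptions introduced here for illustration, not figures from the text.

```python
# Per-frame geometry budgets implied by the quoted per-second figures.
# The 60 Hz (NTSC) and 50 Hz (PAL) refresh rates are illustrative assumptions.

FLAT_SHADED_PER_SEC = 360_000   # flat-shaded polygons per second (quoted above)
POLYGONS_PER_SEC = 180_000      # polygons per second (quoted above)

for refresh_hz in (60, 50):
    flat_per_frame = FLAT_SHADED_PER_SEC // refresh_hz
    poly_per_frame = POLYGONS_PER_SEC // refresh_hz
    print(f"{refresh_hz} Hz: ~{flat_per_frame:,} flat-shaded or ~{poly_per_frame:,} shaded polygons per frame")
```

At 60 Hz this works out to roughly 6,000 flat-shaded polygons per frame, which gives a sense of the geometry budget that games of the era had to work within.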
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square. Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movement is necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (and therefore produced duplicates that omitted it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
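The copy-protection scheme described above amounts to a boot-time authenticity check: the drive looks for data encoded in the pregap wobble that a conventional burner cannot reproduce, and the same channel carries the regional lockout. The sketch below is purely illustrative; the function names, dictionary layout, and region strings are hypothetical stand-ins, not the console's actual firmware interfaces.

```python
# Illustrative sketch of the boot-time check described above. All names are
# hypothetical; this is not actual PlayStation firmware logic.

CONSOLE_REGION = "EUROPE"  # hypothetical region setting of this console


def read_pregap_wobble(disc: dict):
    """Return decoded wobble data from the disc's pregap, or None if absent.

    A conventional CD writer treats the wobble as mechanical oscillation and
    compensates for it, so a burned copy carries the user data but no wobble.
    """
    return disc.get("wobble_data")


def can_boot(disc: dict) -> bool:
    wobble = read_pregap_wobble(disc)
    if wobble is None:
        return False  # burned copy: no wobble data, refuse to boot
    return wobble.get("region") == CONSOLE_REGION  # regional lockout uses the same channel


pressed_disc = {"wobble_data": {"region": "EUROPE"}}
burned_copy = {}  # identical user data, but the wobble channel is missing

print(can_boot(pressed_disc))  # True
print(can_boot(burned_copy))   # False
```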
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel that rivalled the consoles of Sega and Nintendo. 
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, as profits from their video game division came to account for 23% of the company's total profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of its key factors in gaining mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its assistance in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the proprietary cartridge-reliant Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that they could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run off the open source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-Noveck-100] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
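As a small, concrete illustration of the name spaces mentioned above, the sketch below uses Python's standard socket module to resolve a DNS name to the IP addresses it currently maps to. The domain example.com is used purely for illustration, and the addresses returned will vary by resolver and over time.

```python
import socket

# Resolve a DNS name to IP addresses, illustrating the mapping between the
# two principal Internet name spaces (domain names and IP addresses).
host = "example.com"
results = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in results})
print(f"{host} resolves to: {addresses}")
```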
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communication than was possible via satellite. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started to exhibit characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain; a short encoding sketch at the end of this passage illustrates the effect. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s, the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
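To illustrate the mojibake effect mentioned above, the following short Python sketch encodes a string as UTF-8 and then decodes the same bytes with the wrong character encoding. It is a minimal, self-contained illustration; the sample string is arbitrary and the exact garbled output shown in the comments is indicative only.

    # Minimal illustration of mojibake: bytes written as UTF-8 but
    # decoded with a different (wrong) character encoding.
    text = "café 東京"                  # mixed Latin and Japanese characters
    utf8_bytes = text.encode("utf-8")   # correct encoding on the wire

    # Decoding the same bytes as Latin-1 never raises an error (every byte
    # is a valid Latin-1 character), but the result is garbled "mojibake".
    garbled = utf8_bytes.decode("latin-1")

    print(garbled)                      # e.g. 'cafÃ© æ\x9d±äº¬' rather than the original
    print(utf8_bytes.decode("utf-8"))   # round-trips correctly: 'café 東京'

The same mismatch, a document served in one encoding but interpreted in another, is what produces the garbled characters users occasionally see on web pages.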
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
One example of such collaborative work is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve new methods of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web; a minimal request sketch appears at the end of this passage. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
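As a concrete illustration of HTTP as the Web's main access protocol, described above, the short Python sketch below fetches a resource using only the standard library, which is essentially what a browser does before rendering a page. The URL is a placeholder chosen for illustration; any server reachable over HTTP(S) would behave similarly.

    # Minimal HTTP client sketch using Python's standard library.
    # The URL is a placeholder; any web server reachable over HTTP(S) would do.
    from urllib.request import urlopen

    url = "https://example.org/"            # hypothetical target page
    with urlopen(url, timeout=10) as response:
        status = response.status            # HTTP status code, e.g. 200
        content_type = response.headers.get("Content-Type")
        body = response.read()              # raw bytes of the resource

    print(status, content_type, len(body))

A browser performs the same request-response exchange, then interprets the returned HTML, follows embedded hyperlinks, and fetches further resources as needed.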
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF convenes standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to approximately 4.3 billion (2^32) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. 
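To make the CIDR notation, subnet masks, and routing-table lookup described above concrete, here is a short sketch using Python's standard ipaddress module. The prefixes are the documentation examples quoted in the text; the route table itself is a made-up illustration rather than real Internet routing data.

    # Sketch of CIDR prefixes, subnet masks, and longest-prefix routing,
    # using Python's standard ipaddress module.
    import ipaddress

    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)          # 255.255.255.0, the /24 subnet mask
    print(net.num_addresses)    # 256 addresses: 198.51.100.0 .. 198.51.100.255
    print(ipaddress.ip_address("198.51.100.42") in net)   # True

    v6 = ipaddress.ip_network("2001:db8::/32")
    print(v6.num_addresses == 2 ** 96)   # True: 128 address bits minus a 32-bit prefix

    # Longest-prefix match: choose the most specific route containing the
    # destination; fall back to the default route 0.0.0.0/0 if nothing else matches.
    # This table is hypothetical and exists only to illustrate the lookup.
    routes = {
        ipaddress.ip_network("0.0.0.0/0"): "default gateway",
        ipaddress.ip_network("198.51.0.0/16"): "regional network",
        ipaddress.ip_network("198.51.100.0/24"): "local subnet",
    }

    def lookup(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matching = [n for n in routes if addr in n]
        best = max(matching, key=lambda n: n.prefixlen)   # longest prefix wins
        return routes[best]

    print(lookup("198.51.100.7"))   # local subnet
    print(lookup("203.0.113.9"))    # default gateway

The default-route fallback shown here is the same idea discussed next: when no more specific prefix matches, traffic is handed to the default gateway.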
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare using similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. 
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. 
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-nasa-100] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also intended to aid in navigational calculations; in 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values (a brief modern sketch of such a computation follows this passage). The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
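As a purely modern illustration of the class of formulas Torres Quevedo's proposed machine was to evaluate, the Python sketch below computes a^x(y − z)^2 over a sequence of value sets. The particular numbers are arbitrary and have no historical significance.

    # Evaluate a^x * (y - z)^2 for a sequence of value sets, the kind of
    # repeated formula evaluation Torres Quevedo's proposed machine targeted.
    def torres_formula(a: float, x: float, y: float, z: float) -> float:
        return (a ** x) * (y - z) ** 2

    value_sets = [
        (2, 3, 10, 4),   # 2^3 * (10 - 4)^2 = 8 * 36 = 288
        (5, 2, 7, 1),    # 5^2 * (7 - 1)^2  = 25 * 36 = 900
        (3, 1, 2, 9),    # 3^1 * (2 - 9)^2  = 3 * 49  = 147
    ]

    for a, x, y, z in value_sets:
        print(f"a={a} x={x} y={y} z={z} -> {torres_formula(a, x, y, z)}")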
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits (a small sketch of this idea follows at the end of this passage). Input devices are the means by which the operations of a computer are controlled and data is provided to it. Examples include the keyboard and mouse. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
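As a toy illustration of the remark above that circuits arranged as logic gates can control one another, the sketch below models gates as Python functions on 0/1 values and wires them into a one-bit half adder and full adder; these are standard textbook constructions, not anything specific to a particular machine.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    # The XOR gate produces the sum bit, the AND gate the carry bit.
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    # Two half adders plus an OR gate add three bits.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # (sum bit, carry bit)

Chaining full adders bit by bit is, in outline, how an arithmetic unit adds whole binary words.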
Examples include the display and printer. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it fetches the next instruction from the memory location indicated by the program counter, decodes it into control signals, directs the other units to carry it out, and then updates the program counter to point to the following instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). A toy model of this fetch-decode-execute cycle is sketched below. The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
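The toy model promised above ties together the control unit, program counter, ALU and memory in a few lines of Python. The machine, its instruction names and its instruction format are hypothetical and invented purely for illustration; they do not correspond to any real CPU.

def run(program, memory):
    # `program` is a list of instruction tuples, `memory` a list of cells.
    pc = 0                                   # the program counter
    while pc < len(program):
        instruction = program[pc]            # fetch
        op = instruction[0]                  # decode
        pc += 1                              # by default, fall through to the next instruction
        if op == "SET":                      # SET cell value
            memory[instruction[1]] = instruction[2]
        elif op == "ADD":                    # ADD dest src: dest += src
            memory[instruction[1]] += memory[instruction[2]]
        elif op == "JUMP_IF_LESS":           # JUMP_IF_LESS a b target
            if memory[instruction[1]] < memory[instruction[2]]:
                pc = instruction[3]          # a jump simply overwrites the counter
        elif op == "HALT":
            break
    return memory

# Sum the numbers 1 to 10 into cell 0, using cell 1 as a counter,
# cell 2 as the limit and cell 3 as the constant 1.
program = [
    ("SET", 0, 0),               # 0: total = 0
    ("SET", 1, 1),               # 1: counter = 1
    ("SET", 2, 11),              # 2: limit = 11
    ("SET", 3, 1),               # 3: increment = 1
    ("ADD", 0, 1),               # 4: total += counter
    ("ADD", 1, 3),               # 5: counter += 1
    ("JUMP_IF_LESS", 1, 2, 4),   # 6: loop back to instruction 4 while counter < limit
    ("HALT",),                   # 7: stop
]
print(run(program, [0, 0, 0, 0])[0])   # prints 55

The conditional jump at instruction 6 is the loop; it works precisely by rewriting the program counter, as described above.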
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation; a short illustration follows at the end of this passage. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
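The illustration promised above: Python's built-in integer conversions can show how one byte holds 256 bit patterns and how the same eight bits are read differently as an unsigned number or as a two's complement signed number. The specific values are arbitrary examples.

value = -42
raw = value.to_bytes(1, "big", signed=True)       # one memory cell (one byte)
print(raw[0])                                     # 214, the unsigned reading of the same bits
print(int.from_bytes(raw, "big", signed=True))    # -42, the two's complement reading

big = (123456789).to_bytes(4, "big")              # larger numbers span several consecutive bytes
print(list(big), int.from_bytes(big, "big"))      # [7, 91, 205, 21] and 123456789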
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn; a small sketch of this round-robin idea follows at the end of this passage. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
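The round-robin sketch referred to above: each "program" here is simply a Python generator, and the scheduler gives every program one small slice of work in turn, much as a periodic interrupt forces a switch on real hardware. The program names and slice size are arbitrary choices made for the example.

def program(name, steps):
    for i in range(1, steps + 1):
        yield name + ": step " + str(i)   # pause here until the next time slice

def round_robin(programs):
    queue = list(programs)
    while queue:
        current = queue.pop(0)            # pick the next program in the queue
        try:
            print(next(current))          # run it for one "time slice"
            queue.append(current)         # put it back at the end of the queue
        except StopIteration:
            pass                          # this program has finished

round_robin([program("A", 3), program("B", 2)])
# prints A: step 1, B: step 1, A: step 2, B: step 2, A: step 3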
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
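To make the contrast with the calculator concrete, the whole 1-to-1,000 addition reduces to a loop of a few instructions. This is a plain Python rendering of the idea, not the assembly listing discussed next.

total = 0
for n in range(1, 1001):   # repeat the addition for every number from 1 to 1,000
    total += n
print(total)               # prints 500500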
Such a program can be written in just a few instructions of an assembly language such as MIPS. Once told to run the program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler; a toy illustration of this translation follows below. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
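The toy illustration mentioned above: an "assembler" in this spirit only has to translate each mnemonic into its numeric opcode, after which the whole program is nothing but a list of numbers of the kind a stored-program machine keeps in memory. The three-instruction vocabulary and the opcode numbers are invented for the example and do not belong to any real instruction set.

OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}   # hypothetical numeric codes

def assemble(source):
    # Turn lines like "ADD 7" into pairs of numbers: opcode, then operand.
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        machine_code += [OPCODES[mnemonic], int(operand)]
    return machine_code

source = """
LOAD 5
ADD 7
STORE 0
"""
print(assemble(source))   # prints [1, 5, 2, 7, 3, 0]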
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_in_Chinese_astronomy] | [TOKENS: 130] |
Contents Orion in Chinese astronomy The modern constellation Orion lies across two of the quadrants, symbolized by the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ) and Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què), that divide the sky in traditional Chinese uranography. The name of the western constellation in modern Chinese is 猎户座 (liè hù zuò), meaning "the hunter constellation". Stars The map of the Chinese constellations in the area of the constellation Orion consists of: See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Insult_comedy] | [TOKENS: 121] |
Contents Insult comedy Insult comedy is a comedy genre in which the act consists mainly of offensive insults, usually directed at the audience or other performers. Typical targets for insult include people in the show's audience, the town hosting the performance, or the subject of a roast. The style can be distinguished from an act based on satire, or political humor. Insult comedy is often used to deflect or silence hecklers even when the rest of the show is not focused on it. Performers See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Haredi_Judaism] | [TOKENS: 13145] |
Contents Haredi Judaism Haredi Judaism (Hebrew: יהדות חֲרֵדִית, romanized: Yahadut Ḥaredit, IPA: [χaʁeˈdi]) is a branch of Orthodox Judaism that is characterized by its strict interpretation of religious sources and its accepted halakha (Jewish law) and traditions, in opposition to more accommodating values and practices. Its members are often referred to as ultra-Orthodox in English, a term considered pejorative by many of its adherents, who prefer the term strictly Orthodox or Haredi (plural: Haredim). Haredim regard themselves as the most authentic custodians of Jewish religious law and tradition which, in their opinion, is binding and unchangeable. Many consider all other contemporary expressions of Judaism, including Modern Orthodoxy, as "deviations from God's laws", although other movements of Judaism would disagree. Some scholars have suggested that Haredi Judaism is a reaction to societal changes, including political emancipation, the Haskalah movement derived from the Enlightenment, acculturation, secularization, religious reform in all its forms from mild to extreme, and the rise of the Jewish national movements. In contrast to Modern Orthodox Jews, Haredim segregate themselves from other parts of society, although some Haredi communities encourage young people to get a professional degree or establish a business. Furthermore, some Haredi groups, like Chabad-Lubavitch, encourage outreach to non-observant and unaffiliated Jews. As of 2020, there were about 2.1 million Haredim globally, representing 14% of the world's Jewish population. Haredim primarily live in Israel (17% of Israeli Jews and 14% of Israel's total population), North America (12% of American Jews), and Western Europe (most notably Antwerp and Stamford Hill in London). Absence of intermarriage, coupled with both a high birth and retention rate, spur rapid growth of the Haredi population, which is on pace to more than double every 20 years. Their numbers have been further boosted since the 1970s by secular Jews adopting a Haredi lifestyle as part of the baal teshuva movement; however, this has been somewhat offset by those leaving. Terminology The term Haredi is a Modern Hebrew adjective derived from the Biblical verb hared, which appears in the Book of Isaiah (66:2; its plural haredim appears in Isaiah 66:5) and is translated as "[one who] trembles" at the word of God. The word connotes an awe-inspired fear to perform the will of God; it is used to distinguish them from other Orthodox Jews (similar to the names used by Christian Quakers and Shakers to describe their relationship to God). The term most commonly used by outsiders, for example most American news organizations, is ultra-Orthodox Judaism. Hillel Halkin suggests the origins of the term may date to the 1950s, a period in which Haredi survivors of the Holocaust first began arriving in America. However, Isaac Leeser (1806–1868) was described in 1916 as "ultra-Orthodox". The word Haredi is often used in the Jewish diaspora in place of the term ultra-Orthodox, which many view as inaccurate or offensive, it being seen as a derogatory term suggesting extremism; English-language alternatives that have been proposed include fervently Orthodox, strictly Orthodox, or traditional Orthodox. Others, however, dispute the characterization of the term as pejorative. Ari L. Goldman, a professor at Columbia University, notes that the term simply serves a practical purpose to distinguish a specific part of the Orthodox community, and is not meant as pejorative. 
Others, such as Samuel Heilman, criticized terms such as ultra-Orthodox and traditional Orthodox, arguing that they misidentify Haredi Jews as more authentically Orthodox than others, as opposed to adopting customs and practises that reflect their desire to separate from the outside world. The community has sometimes been characterized as traditional Orthodox, in contradistinction to the Modern Orthodox, the other major branch of Orthodox Judaism, and not to be confused with the movement represented by the Union for Traditional Judaism, which originated in Conservative Judaism. Haredi Jews also use other terms to refer to themselves. Common Yiddish words include Yidn (Jews), erlekhe Yidn (virtuous Jews), ben Torah (son of the Torah), frum (pious), and heimish (home-like; i.e., "our crowd"). In Israel, Haredi Jews are sometimes also called by the derogatory slang words dos (plural dosim), that mimics the traditional Ashkenazi Hebrew pronunciation of the Hebrew word datiyim (religious), and more rarely, sh'chorim (blacks), a reference to the black clothes they typically wear; a related informal term used in English is black hat. Population Due to its imprecise definition, lack of data collection, and rapid change over time, estimating the global Haredi population is difficult. The true number of Haredim may be significantly underestimated due to their reluctance to participate in surveys and censuses. In 1992, out of a total of 1,500,000 Orthodox Jews worldwide, about 550,000 were Haredi (half of them in Israel). One estimate given in 2011 stated that there were approximately 1.3 million Haredi Jews globally. Studies have shown a very high growth rate, with a large young population. Haredi population grew to 2.1 million in 2020 and is expected to double by 2040. The vast majority of Haredi Jews are Ashkenazi. However, some 20% of the Haredi population are thought to belong to the Sephardic Haredi stream. In recent decades, Haredi society has grown due to the addition of a religious population that identifies with the Shas movement. The percentage of people leaving the Haredi population has been estimated between 6% and 18%. Israel has the largest Haredi population. In 1948, there were about 35,000 to 45,000 Haredi Jews in Israel. By 1980, Haredim made up 4% of the Israeli population. Haredim made up 9.9% of the Israeli population in 2009, with 750,000 out of 7,552,100; by 2014, that figure had risen to 11.1%, with 910,500 Haredim out of a total Israeli population of 8,183,400. According to a December 2017 study conducted by the Israeli Democracy Institute, the number of Haredi Jews in Israel exceeded 1 million in 2017, making up 12% of the population in Israel. In 2019, Haredim reached a population of almost 1,126,000; over the next year, it reached 1,175,000 (12.6% of total population). By the end of 2023, it reached almost 1,335,000, or 13.6% of total population; at the end of 2024, it passed over 1,392,000, thus representing 13.9% of the population, and by the end of 2025 it numbered over 1,452,000 or 14.3% of the total population of Israel. The number of Haredi Jews in Israel continues to rise rapidly, with their current population growth rate being 4% per year. The number of children per woman is 7.2, and the share of Haredim among those under the age of 20 was 16.3% in 2009 (29% of Jews). By 2030, the Haredi Jewish community is projected to make up 16% of the total population, and by 2065, a third of the Israeli population, including non-Jews. 
By then, one in two Israeli children would be Haredi. It is also projected that the number of Haredim in 2059 may be between 2.73 and 5.84 million, of an estimated total number of Israeli Jews between 6.09 and 9.95 million. The largest Israeli Haredi concentrations are in Jerusalem, Bnei Brak, Modi'in Illit, Beitar Illit, Beit Shemesh, Kiryat Ye'arim, Ashdod, Rekhasim, Safed, and El'ad. Two Haredi cities, Kasif and Harish, are planned.[citation needed] The United States has the second largest Haredi population, which has a growth rate on pace to double every 20 years. In 2000, there were 360,000 Haredi Jews in the US (7.2 per cent of the approximately 5 million Jews in the U.S.); by 2006, demographers estimate the number had grown to 468,000 (30% increase), or 9.4 percent of all U.S. Jews. In 2013, it was estimated that there were 530,000 total ultra-Orthodox Jews in the United States, or 10% of all American Jews. By 2011, 61% of all Jewish children in Eight-County New York City metropolitan area were Orthodox, with Haredim making up 49%. In 2020, it was estimated that there were approximately 700,000 total ultra-Orthodox Jews in the United States, or 12% of all American Jews. This number is expected to grow significantly in the coming years, due to high Haredi birth rates in America. Most American Haredi Jews live in the greater New York metropolitan area. The largest centers of Haredi and Hasidic life in New York are found in Brooklyn. The New York City borough of Queens is home to a growing Haredi population, mainly affiliated with the Yeshiva Chofetz Chaim and Yeshivas Ohr HaChaim in Kew Gardens Hills and Yeshiva Shaar Hatorah in Kew Gardens. Many of the students attend Queens College. There are major yeshivas and communities of Haredi Jews in Far Rockaway, such as Yeshiva of Far Rockaway and a number of others. Hasidic shtibelach exist in these communities as well, mostly catering to Haredi Jews who follow Hasidic customs, while living a Litvish or Modern Orthodox cultural lifestyle, although small Hasidic enclaves do exist, such as in the Bayswater section of Far Rockaway. One of the oldest Haredi communities in New York is on the Lower East Side, home to the Mesivtha Tifereth Jerusalem. Washington Heights, in northern Manhattan, is the historical home to German Jews, with Khal Adath Jeshurun and Yeshiva Rabbi Samson Raphael Hirsch. The presence of Yeshiva University attracts young people, many of whom remain in the area after graduation. The Yeshiva Sh'or Yoshuv, together with many synagogues in the Lawrence neighborhood and other Five Towns neighborhoods, such as Woodmere and Cedarhurst, have attracted many Haredi Jews. The Hudson Valley, north of New York City, has the most rapidly growing Haredi communities, such as the Hasidic communities in Kiryas Joel of Satmar Hasidim, and New Square of the Skver. A vast community of Haredi Jews lives in the Monsey, New York, area. There are significant Haredi communities in Lakewood (New Jersey), home to the largest non-Hasidic Lithuanian yeshiva in America, Beth Medrash Govoha. There are also sizable communities in Teaneck, Englewood, Mahwah, Passaic and Edison, where a branch of the Rabbi Jacob Joseph Yeshiva opened in 1982. There is also a community of Syrian Jews favorable to the Haredim in their midst in Deal, New Jersey. The Haredi community of New Haven has close to 150 families and a number of thriving Haredi educational institutions. 
Waterbury, Connecticut has a growing Haredi community, in Waterbury proper, and in the neighboring areas of Blueridge and Naugatuck. Baltimore, Maryland, has a large Haredi population. The major yeshiva is Yeshivas Ner Yisroel, founded in 1933, with thousands of alumni and their families. Ner Yisroel is also a Maryland state-accredited college, and has agreements with Johns Hopkins University, Towson University, Loyola College in Maryland, University of Baltimore, and University of Maryland, Baltimore County, allowing undergraduate students to take night courses at these colleges and universities in a variety of academic fields. The agreement also allows the students to receive academic credits for their religious studies. Silver Spring, Maryland, and its environs has a growing Haredi community, mostly of highly educated and skilled professionals working for the United States government in various capacities, most living in Kemp Mill, White Oak, and Woodside, and many of its children attend the Yeshiva of Greater Washington and Yeshivas Ner Yisroel in Baltimore. Aventura, Sunny Isles Beach, Golden Beach, Surfside and Bal Harbour are home to a large and growing Haredi population. The community is long-established in the area, with several synagogues including The Shul of Bal Harbour, Young Israel of Bal Harbour, Aventura Chabad, Beit Rambam, Safra Synagogue of Aventura, and Chabad of Sunny Isles; mikvehs, Jewish schools and kosher restaurants. The community has recently grown much further, due to many Orthodox Jews from New York moving to Florida during the COVID-19 pandemic. North of Miami, the communities of Boca Raton, Lauderhill, Boynton Beach, and Hollywood have significant Haredi populations. Los Angeles has many Haredi Jews, most living in the Pico-Robertson and Fairfax (Fairfax Avenue-La Brea Avenue) areas. Chicago is home to the Haredi Telshe Yeshiva, with many other Haredim living in the city. Haredim in Philadelphia primarily live in Bala Cynwyd, and the community is centered around Aish HaTorah and the Philadelphia Community Kollel. In Pittsburgh a small yeshiva opened in 1945. Today there are approximately 200 Chabad families living in the Squirrel Hill neighborhood. Kingston has a young growing Chabad Haredi community which has been growing steadily over the past 20 years since the first families moved there when a yeshiva was opened. Denver has a large Haredi population of Ashkenazi origin, dating back to the early 1920s. The Haredi Denver West Side Jewish Community adheres to Litvak Jewish traditions (Lithuanian), and has several congregations located within their communities. Boston and Brookline, Massachusetts, have the largest Haredi populations in New England. One of the oldest Haredi Lithuanian yeshivas, Telshe Yeshiva, transplanted itself to Cleveland in 1941. Beachwood, Ohio has a large and growing Haredi community, and is a heavily Jewish suburb of Cleveland. The haredi community is centered around the Beachwood Kehilla and Green Road Synagogue, has a mikvah and a Jewish day school. In 1998, the Haredi population in the Jewish community of the United Kingdom was estimated at 27,000 (13% of affiliated Jews). The largest communities are located in London, particularly Stamford Hill, Golders Green, Hendon, Edgware; in Salford and Prestwich in Greater Manchester; and in Gateshead. A 2007 study asserted that three out of four British Jewish births were Haredi, who then accounted for 17% of British Jews (45,500 out of around 275,000). 
Another study in 2010 established that there were 9,049 Haredi households in the UK, which would account for a population of nearly 53,400, or 20% of the community. The Board of Deputies of British Jews has predicted that the Haredi community will become the largest group in Anglo-Jewry within the next three decades: In comparison with the national average of 2.4 children per family, Haredi families have an average of 5.9 children, and consequently, the population distribution is heavily biased to the under-20-year-olds. By 2006, membership of Haredi synagogues had doubled since 1990. British Haredi fertility rate has also been estimated to be as high as 6.9 children per woman. An investigation by The Independent in 2014 reported that more than 1,000 children in Haredi communities were attending illegal schools where secular knowledge is banned, and they learn only religious texts, meaning they leave school with no qualifications and often unable to speak any English. The 2018 Survey by the Jewish Policy Research (JPR) and the Board of Deputies of British Jews showed that the high birth rate in the Haredi and Orthodox community reversed the decline in the Jewish population in Britain. In 2020, it was estimated that there were approximately 76,000 total ultra-Orthodox Jews in the United Kingdom, or 25% of all British Jews, a significant increase from 1998 and 2010. About 25,000 Haredim live in the Jewish community of France, mostly people of Sephardic, Maghrebi Jewish descent. Important communities are located in Paris (19th arrondissement), Strasbourg, and Lyon. Other important communities, mostly of Ashkenazi Jews, are the Antwerp community in Belgium, as well as in the Swiss communities of Zürich and Basel, and in the Dutch community in Amsterdam. There is also a Haredi community in Vienna, in the Jewish community of Austria. Other countries with significant Haredi populations include: Canada, with a total number of 30,000 Haredim, with large Haredi centres in Montreal and Toronto; South Africa, primarily in Johannesburg; and an estimated 7,500 Haredim in Australia, centred in Melbourne. Haredi communities also exist in Argentina, especially in Buenos Aires, and in Brazil, primarily in São Paulo. A Haredi city is under construction (2021) in Mexico near Ixtapan de la Sal. Decades after The Holocaust, Haredim are growing again in Budapest, opening several new synagogues and two mikvehs in the city over the past couple of years. History Throughout Jewish history, Judaism has always faced internal and external challenges to its beliefs and practices which have emerged over time and produced counter-responses. According to its adherents, Haredi Judaism is a continuation of Rabbinic Judaism, and the immediate forebears of contemporary Haredi Jews were the Jewish religious traditionalists of Central and Eastern Europe who fought against secular modernization's influence which reduced Jewish religious observance. Indeed, adherents of Haredi Judaism, just like Rabbinic Jews, see their beliefs as part of an unbroken tradition which dates back to the revelation at Sinai. However, most historians of Orthodoxy consider Haredi Judaism, in its most modern incarnation, to date back to the beginning of the 20th century. For centuries, before Jewish emancipation, European Jews were forced to live in ghettos where Jewish culture and religious observance were preserved. 
Change began in the wake of the Age of Enlightenment, when some European liberals sought to include the Jewish population in the emerging empires and nation states. The influence of the Haskalah movement (Jewish Enlightenment) was also evident. Supporters of the Haskalah held that Judaism must change, in keeping with the social changes around them. Other Jews insisted on strict adherence to halakha (Jewish law and custom). In Germany, the opponents of Reform rallied to Samson Raphael Hirsch, who led a secession from German Jewish communal organizations to form a strictly Orthodox movement, with its own network of synagogues and religious schools. His approach was to accept the tools of modern scholarship and apply them in defence of Orthodox Judaism. In the Polish–Lithuanian Commonwealth (including areas traditionally considered Lithuanian), Jews true to traditional values gathered under the banner of Agudas Shlumei Emunei Yisroel. Moses Sofer was opposed to any philosophical, social, or practical change to customary Orthodox practice. Thus, he did not allow any secular studies to be added to the curriculum of his Pressburg Yeshiva. Sofer's student Moshe Schick, together with Sofer's sons Shimon and Samuel Benjamin, took an active role in arguing against the Reform movement. Others, such as Hillel Lichtenstein, advocated an even more stringent position for Orthodoxy. A major historical event was the schism that followed the Universal Israelite Congress of 1868–1869 in Pest, Hungary. In an attempt to unify all streams of Judaism under one constitution, the Orthodox offered the Shulchan Aruch as the ruling code of law and observance. This was dismissed by the reformists, leading many Orthodox rabbis to resign from the Congress and form their own social and political groups. Hungarian Jewry split into two major institutionally sectarian groups: Orthodox and Neolog. However, some communities refused to join either of the groups, calling themselves "Status Quo".[citation needed] Schick demonstrated support in 1877 for the separatist policies of Samson Raphael Hirsch in Germany. Schick's own son was enrolled in the Hildesheimer Rabbinical Seminary, headed by Azriel Hildesheimer, which taught secular studies. Hirsch, however, did not reciprocate, and expressed astonishment at Schick's halakhic contortions in condemning even those Status Quo communities that clearly adhered to halakha. Lichtenstein opposed Hildesheimer and his son Hirsh Hildesheimer, as they made use of the German language in sermons from the pulpit and seemed to lean in the direction of Zionism. Shimon Sofer was somewhat more lenient than Lichtenstein on the use of German in sermons, allowing the practice as needed for the sake of keeping cordial relations with the various governments. Likewise, he allowed extra-curricular gymnasium studies for students whose rabbinical positions would be recognized by the governments, stipulating that strict adherence to God-fearing standards had to be demonstrated in each individual case. In 1912, the World Agudath Israel was founded to differentiate itself from the Torah Nationalist Mizrachi and secular Zionist organizations. It was dominated by the Hasidic rebbes and Lithuanian rabbis and roshei yeshiva (deans). The organization nominated rabbis who subsequently were elected as representatives in the Polish legislature, the Sejm, such as Meir Shapiro and Yitzhak-Meir Levin. Not all Hasidic factions joined the Agudath Israel, some remaining independent instead, such as Machzikei Hadat of Galicia. 
In 1919, Yosef Chaim Sonnenfeld and Yitzchok Yerucham Diskin founded the Edah HaChareidis as part of Agudath Israel in then-Mandate Palestine. In 1924, Agudath Israel obtained 75 percent of the votes in the Kehilla elections. The Orthodox community polled some 16,000 of a total 90,000 at the Knesseth Israel in 1929. But Sonnenfeld lobbied Sir John Chancellor, the High Commissioner, for separate representation in the Palestine Communities Ordinance from that of the Knesseth Israel. He explained that the Agudas Israel community would cooperate with the Vaad Leumi and the National Jewish Council in matters pertaining to the municipality, but sought to protect its religious convictions independently. The community petitioned the Permanent Mandates Commission of the League of Nations on this issue. The one-community principle prevailed despite their opposition, but this episode is seen as the creation of the Haredi community in Israel as a body separate from the other Orthodox and Zionist movements. In 1932, Sonnenfeld was succeeded by Yosef Tzvi Dushinsky, a disciple of the Shevet Sofer, one of the grandchildren of Moses Sofer. Dushinsky promised to build up a strong Jewish Orthodoxy at peace with the other Jewish communities and the non-Jews. In general, the present-day Haredi population originates in two distinct post-Holocaust waves. The vast majority of Hasidic and Litvak communities were destroyed during the Holocaust. Although Hasidic customs have largely been preserved, the customs of Lithuanian Jewry, including its unique Hebrew pronunciation, have been almost lost. Litvish customs are still preserved primarily by the few older Jews who were born in Lithuania prior to the Holocaust. In the decade or so after 1945, some notable Haredi leaders led a strong drive to revive and maintain these lifestyles. The Chazon Ish was particularly prominent in the early days of the State of Israel. Aharon Kotler established many of the Haredi schools and yeshivas in the United States and Israel, and Joel Teitelbaum had a significant impact on revitalizing Hasidic Jewry; many of the Jews who fled Hungary during the 1956 revolution became followers of his Satmar dynasty, which became the largest Hasidic group in the world. These Jews typically have maintained a connection only with other religious family members. As such, those growing up in such families have little or no contact with non-Haredi Jews. The second wave began in the 1970s and is associated with the religious revival of the so-called baal teshuva movement, although most of the newly religious become Orthodox, and not necessarily fully Haredi.[citation needed] The Sephardic Haredi lifestyle movement was also formed and spread beginning in the 1980s by Ovadia Yosef, alongside the establishment of the Shas party in 1984. This led many Sephardi Jews to adopt the clothing and culture of Lithuanian Haredi Judaism, though it had no historical basis in their own tradition.[citation needed] Many yeshivas were also established specifically for new adopters of the Haredi way of life.[citation needed] The original Haredi population has been instrumental in the expansion of their lifestyle, though criticisms have been made of discrimination towards the later adopters of the Haredi lifestyle in shidduchim (matchmaking) and the school system. 
Practices and beliefs The Haredim represent the conservative or pietistic form of Jewish fundamentalism, distinct from the radical fundamentalism of Gush Emunim, and emphasising withdrawal from, and disdain for, the secular world, and the creation of an alternative world which insulates the Torah and the life it prescribes from outside influences. Haredi Judaism is not an institutionally cohesive or homogeneous group, but comprises a diversity of spiritual and cultural orientations, generally divided into a broad range of Hasidic courts and Litvishe-Yeshivish streams from Eastern Europe, and Oriental Sephardic Haredi Jews. These groups often differ significantly from one another in their specific ideologies and lifestyles, as well as the degree of stringency in religious practice, rigidity of religious philosophy, and isolation from the general culture that they maintain.[citation needed] Some Haredim encourage outreach to less observant and unaffiliated Jews and hilonim (secular Israeli Jews). Some scholars, including some secular and Reform Jews, describe the Haredim as "radical fundamentalists". The effort to keep clear of external influence is a core characteristic of Haredi Judaism. Historically, new media of communication such as books, newspapers and magazines, and later tapes, CDs and television, were dealt with either by transforming and controlling the content or by having rabbinic leadership censor it selectively or altogether. In the modern digital era, the difficulty of censoring the Internet and, conversely, the Internet's importance have resulted in a decades-long and ongoing struggle of comprehension, adaptation, and regulation on the part of rabbinical leadership and community activists. These beliefs and practices, which have been interpreted as "isolationist", can bring them into conflict with authorities. In 2018, a Haredi school in the United Kingdom was rated as "inadequate" by the Office for Standards in Education, after repeated complaints were raised about the censoring of textbooks and exam papers which contained mentions of homosexuality, examples of women socializing with men, pictures showing women's shoulders and legs, or information that contradicted a creationist worldview. Haredi life, like Orthodox Jewish life in general, is very family-centered and ordered. Boys and girls attend separate schools and proceed to higher Torah study in a yeshiva or seminary, respectively, starting anywhere between the ages of 13 and 18. A significant proportion of young men remain in yeshiva until their marriage (often arranged). After marriage, many Haredi men continue their Torah studies in a kollel. Studying in secular institutions is often discouraged, although educational facilities for vocational training in a Haredi framework do exist. In the United States and Europe, the majority of Haredi males are active in the workforce. For various reasons, in Israel a majority (56%) of Haredi men do not work, though some of them are part of the unofficial workforce. Haredi families (and Orthodox Jewish families in general) are usually much larger than non-Orthodox families, with an average of seven children per family, and it is not unheard of for families to have twelve or more children. About 80% of female Haredi Jews in Israel work. Haredi Jews are typically opposed to the viewing of television and films, and the reading of secular newspapers and books. 
There has been a strong campaign against the Internet, and Internet-enabled mobile phones without filters have also been banned by leading rabbis. In May 2012, 40,000 Haredim gathered at Citi Field, a baseball park in New York City, to discuss the dangers of unfiltered Internet. The event was organized by the Ichud HaKehillos LeTohar HaMachane. The Internet has been allowed for business purposes, so long as filters are installed. In some instances, forms of recreation which conform to Jewish law are treated as antithetical to Haredi Judaism. In 2013, the Rabbinical Court of the Ashkenazi Community in the Haredi settlement of Beitar Illit ruled against Zumba (a type of dance fitness) classes, although they were held with a female instructor and all-female participants. The Court said in part: "Both in form and manner, the activity [Zumba] is entirely at odds with both the ways of the Torah and the holiness of Israel, as are the songs associated to it." Jewish Chicago has lauded the Haredim for their lifestyle, arguing that it features low rates of crime and drug use and a strong sense of family and community. Because Haredi Judaism places a heavy emphasis on marriage, especially at a young age, some members rely on the shidduch (matchmaking) system. They employ a shadchan (a professional matchmaker) to support them in their search for a spouse. While there is no current statistical data showing how many people use the services of a shadchan, it is estimated that the vast majority of Haredi couples were paired by one. However, with the broader societal shift to online dating, matchmaking in Orthodox and Haredi Judaism has started making inroads online. Vastly different from the most popular online dating services, apps like Shidduch pair couples based upon shared values and life goals. To do this, users fill out a digital resume. The app was made possible by a partnership between its developers and the Orthodox Union, the same group responsible for kosher food certification ("Circle-U"). The standard mode of dress for males of the Lithuanian stream is a black or navy suit and a white shirt. Headgear includes black fedora or Homburg hats, with black skull caps. Pre-war Lithuanian yeshiva students also wore light-coloured suits, along with beige or grey hats, and prior to the 1990s, it was common for Americans of the Lithuanian stream to wear coloured shirts throughout the week, reserving white shirts for Shabbos. Beards are common among Haredi and many other Orthodox Jewish men, and Hasidic men will almost never be clean-shaven. Women adhere to the laws of modest dress, and wear long skirts and sleeves, high necklines, and, if married, some form of hair covering. Haredi women never wear trousers, although most do wear pajama-trousers within the home at night. Over the years, it has become popular among some Haredi women to wear sheitels (wigs) that are thought to be more attractive than their own natural hair, drawing criticism from some more conservative Haredi rabbis. Mainstream Sephardi Haredi rabbi Ovadia Yosef forbade the wearing of wigs altogether. Haredi women often dress more freely and casually within the home, as long as the body remains covered in accordance with the halakha. More modernized Haredi women are somewhat more lenient in matters of their dress, and some follow the latest trends and fashions, while conforming to halakha. 
Non-Lithuanian Hasidic men and women differ from the Lithuanian stream by having a much more specific dress code, the most obvious difference for men being the full-length suit jacket (rekel) on weekdays, and the fur hat (shtreimel) and silk caftan (bekishe) on the Sabbath. Haredi neighborhoods have been said by some to be safer, with less violent crime, although this is a generalization that may apply only to specific communities rather than to all of them. In Israel, the entrances to some of the most extreme Haredi neighborhoods are fitted with signs that ask for modest clothing to be worn. Some areas are known to have "modesty patrols"; people dressed in ways perceived as immodest may suffer harassment, and advertisements featuring scantily dressed models may be targeted for vandalism. These concerns are also addressed through public lobbying and legal avenues. During the week-long Rio Carnival in Rio de Janeiro, many of the city's 7,000 Orthodox Jews feel compelled to leave the city, due to the immodest exposure of participants. In 2001, Haredi campaigners in Jerusalem succeeded in persuading the Egged bus company to get all their advertisements approved by a special committee. By 2011, Egged had gradually removed all bus adverts that featured women, in response to their continuous defacement. A court order that stated such action was discriminatory led to Egged's decision not to feature people at all (neither male nor female). Depictions of certain other creatures, such as space aliens, were also banned, in order not to offend Haredi sensibilities. Haredi Jews also campaign against other types of advertising that promote activities they deem offensive or inappropriate. Owing to halakha, under which Orthodox Jews believe certain activities are prohibited on Shabbat, most state-run buses in Israel do not run on Saturdays, regardless of whether riders are Orthodox, or even whether they are Jewish. In a similar vein, Haredi Jews in Israel have demanded that the roads in their neighborhoods be closed on Saturdays, vehicular traffic being viewed as an "intolerable provocation" upon their religious lifestyle (see Driving on Shabbat in Jewish law). In most cases, the authorities granted permission after Haredi petitioning and demonstrations, some of them including fierce clashes between Haredi Jews and secular counter-demonstrators, and violence against police and motorists. While Jewish modesty law requires gender separation under various circumstances, observers have contended that there is a growing trend among some groups of Hasidic Haredi Jews to extend its observance to the public arena. In the Hasidic village of Kiryas Joel, New York, an entrance sign asks visitors to "maintain sex separation in all public areas", and the bus stops have separate waiting areas for men and women. In New Square, another Hasidic enclave, men and women are expected to walk on opposite sides of the road. In Israel, Jerusalem residents of Mea Shearim were banned from erecting a street barrier dividing men and women during the week-long Sukkot festival's nightly parties, and street signs requesting that women avoid certain pavements in Beit Shemesh have been repeatedly removed by the municipality. Since 1973, buses catering to Haredi Jews running from Rockland County and Brooklyn into Manhattan have had separate areas for men and women, allowing passengers to conduct on-board prayer services. 
Although the lines are privately operated, they serve the general public, and in 2011 the set-up was challenged on grounds of discrimination and deemed illegal. During 2010–2012, there was much public debate in Israel surrounding the existence of segregated Haredi Mehadrin bus lines (whose policy calls for men and women to stay in their respective areas: men in the front of the bus and women in the rear), following an altercation that occurred after a woman refused to move to the rear of the bus to sit among the women. A subsequent court ruling stated that while voluntary segregation should be allowed, forced separation is unlawful. Israeli national airline El Al has agreed to provide gender-separated flights in consideration of Haredi requirements. Education in the Haredi community is strictly segregated by sex. Yeshiva education for boys is primarily focused on the study of Jewish scriptures, such as the Torah and Talmud (non-Hasidic yeshivas in the United States teach secular studies in the afternoon); girls study both Jewish religious subjects and broader secular subjects. In 1930s Poland, the Agudath Israel movement published its own Yiddish-language paper, Dos Yiddishe Tagblatt. In 1950, the Agudah started printing Hamodia, a Hebrew-language Israeli daily. Haredi publications tend to shield their readership from objectionable material, and perceive themselves as a "counterculture", desisting from advertising secular entertainment and events. The editorial policy of a Haredi newspaper is determined by a rabbinical board, and every edition is checked by a rabbinical censor. A strict policy of modesty is characteristic of the Haredi press in recent years, and pictures of women are usually not printed. In 2009, the Israeli daily Yated Ne'eman doctored an Israeli cabinet photograph, replacing two female ministers with images of men, and in 2013, the Bakehilah magazine pixelated the faces of women appearing in a photograph of the Warsaw Ghetto Uprising. The mainstream Haredi political party Shas also refrains from publishing female images. Among the Haredi publishers that have not adopted this policy is ArtScroll, which does publish pictures of women in its books. No coverage is given to serious crime, violence, sex, or drugs, and little coverage is given to non-Orthodox streams of Judaism. Inclusion of "immoral" content is avoided, and when publication of such stories is a necessity, they are often written ambiguously. The Haredi press generally takes an ambivalent stance towards Zionism and gives more coverage to issues that concern the Haredi community, such as the drafting of girls and yeshiva students into the army, autopsies, and Shabbat observance. In Israel, it portrays the secular world as "spitefully anti-Semitic", and describes secular youth as "mindless, immoral, drugged, and unspeakably lewd". Such attacks have led to Haredi editors being warned about libelous provocations. While the Haredi press is extensive and varied in Israel, only around half the Haredi population reads newspapers. Around 10% read secular newspapers, while 40% do not read any newspaper at all. According to a 2007 survey, 27% read the weekend Friday edition of Hamodia, and 26% the Yated Ne'eman. In 2006, the most-read Haredi magazine in Israel was the Mishpacha weekly, which sold 110,000 copies. Other popular Haredi publications include Ami Magazine and The Flatbush Jewish Journal. 
Haredi leaders have at times suggested a ban on the internet and any internet-capable device, their reasoning being that the immense amount of information can be corrupting, and that the ability to use the internet without observation from the community can lead to individuation. Some Haredi businessmen use the internet throughout the week, but they still observe Shabbat in every aspect by not accepting or processing orders from Friday evening to Saturday evening. They use the internet under strict filters and guidelines. The kosher cell phone was introduced to the Jewish public with the sole ability to call other phones: it could not access the internet or send text messages, and had no camera. A kosher phone plan was also created, with reduced rates for kosher-to-kosher calls, to encourage their use within the community. News hotlines are an important source of news in the Haredi world. Since many Haredi Jews do not listen to the radio or have access to the internet, even if they read newspapers, they are left with little or no access to breaking news. News hotlines were formed to fill this gap, and many have expanded to additional fields over time. Currently, many news lines provide rabbinic lectures, entertainment, business advice, and similar services, in addition to their primary function of reporting the news. Many Hasidic sects maintain their own hotlines, where relevant internal news is reported and the group's perspective can be advocated for. In the Israeli Haredi community, there are dozens of prominent hotlines, in both Yiddish and Hebrew. Some Haredi hotlines have played significant public roles. In Israel From the founding of Zionism in the 1890s, Haredi leaders voiced objections to its secular orientation. After the establishment of the State of Israel, some Haredi Jews observed Israeli Independence Day as a day of mourning and referred to Israeli state holidays as byimey edeyhem ("idolatrous holidays"). The chief political division among Haredi Jews has been in their approach to the State of Israel. After Israeli independence, different Haredi movements took varying positions on it. Only a minority of Haredi Jews consider themselves to be Zionists. Haredim who do not consider themselves Zionists fall into two camps: non-Zionist and anti-Zionist. Non-Zionist Haredim, who comprise the majority, do not object to the State of Israel as an independent Jewish state, and many even consider it to be positive, but they do not believe that it has any religious significance. Anti-Zionist Haredim, who are a minority but are more publicly visible than the non-Zionist majority, believe that any Jewish independence prior to the coming of the Messiah is a sin. The ideologically non-Zionist United Torah Judaism alliance, comprising Agudat Yisrael and Degel HaTorah (together with the umbrella organizations World Agudath Israel and Agudath Israel of America), represents a moderate and pragmatic stance of cooperation with the State of Israel and participation in the political system. UTJ has been a participant in numerous coalition governments, seeking to influence state and society in a more religious direction and maintain welfare and religious funding policies. In general, its position is supportive of Israel. Haredim who are stridently anti-Zionist fall under the umbrella of the Edah HaChareidis, which rejects participation in politics and state funding of its affiliated institutions, in contradistinction to Agudah-affiliated institutions. 
Neturei Karta is a very small activist organization of anti-Zionist Haredim, whose controversial activities have been strongly condemned, including by other anti-Zionist Haredim. Haredi support is often required to form coalition governments in the Knesset. In recent years, some rebbes affiliated with Agudath Israel, such as the Sadigura rebbe Avrohom Yaakov Friedman, have taken stances closer to the Israeli right wing on security, settlements and withdrawal from the Gaza Strip. Shas represents Sephardi and Mizrahi Haredim, and, while having many points in common with Ashkenazi Haredim, differs from them by its more enthusiastic support for the State of Israel and the IDF. The Sikrikim, an anti-Zionist group composed of Haredi Jews, is considered a radical organization by Israelis. In the Haredi (and Orthodox) viewpoint, the purpose of marriage is companionship as well as having children. There is a high rate of marriage in the Haredi community: 83% are married, compared with 63% in the non-Haredi community in Israel. Marriage is viewed as holy, and as the natural home for a man and a woman to truly love each other. In 2016, the divorce rate in Israel was 5% among the Haredi population, compared to the general population rate of 14%. In 2016, Haaretz claimed that divorces among Haredim were increasing in Israel. In 2017, some predominantly Haredi cities reported the highest growth rates in divorce in Israel, in the context of generally falling rates of divorce, and in 2018, some predominantly Haredi cities reported drops in divorce, in the context of generally rising rates of divorce. When the divorce is linked to one spouse leaving the community, the one who chooses to leave is often shunned by his or her community and forced to abandon the children, as most courts prefer to keep children in an established status quo. Haredim primarily educate their children in their own private schools, starting with chederim for pre-school and primary school ages, continuing to yeshivos for boys of secondary school age, and to seminaries, often called Bais Yaakovs, for girls of secondary school age. Only Jewish religiously observant students are admitted, and parents must agree to abide by the rules of the school to keep their children enrolled. Yeshivas are headed by rosh yeshivas (deans) and principals. Many Hasidic schools in Israel, Europe, and North America teach few (or no) secular subjects, while some of the Litvish (Lithuanian-style) schools in Israel follow educational policies similar to those of the Hasidic schools. In the U.S., most teach secular subjects to boys and girls, as part of a dual curriculum of secular subjects (generally called "English") and Torah subjects. Yeshivas teach mostly Talmud and Rabbinic literature, while the girls' schools teach Jewish Law, Midrash, and Tanach (Hebrew Bible). Between 2007 and 2017, the number of Haredim studying in higher education rose from 1,000 to 10,800. In 2007, the Kemach Foundation was established to become an investor in the sector's social and economic development, and provide opportunities for employment. Through the philanthropy of Leo Noé of London, later joined by the Wolfson family of New York and Elie Horn from Brazil, Kemach has facilitated academic and vocational training. With a $22 million budget, including government funding, Kemach provides individualized career assessment, academic or vocational scholarships, and job placement for the entire Haredi population in Israel. 
The Foundation is managed by specialists who, coming from the Haredi sector themselves, are familiar with the community's needs and sensitivities. By April 2014, more than 17,800 Haredim had received the services of Kemach, and more than 7,500 had received, or were continuing to receive, monthly scholarships to fund their academic or vocational studies. For every 500 graduates, the net benefit to the government would be 80.8 million NIS if they work for one year, 572.3 million NIS if they work for five years, and 2.8 billion NIS (discounted) if they work for 30 years. The Council for Higher Education announced in 2012 that it was investing NIS 180 million over the following five years to establish appropriate frameworks for the education of Haredim, focusing on specific professions. The largest Haredi campus in Israel is The Haredi Campus - The Academic College Ono. In the midst of a controversy surrounding the limited secular education in some Haredi yeshivas, New York City mayor Eric Adams held up the Haredi yeshiva system as a model to emulate, arguing that "We need to ask, 'What are we doing wrong in our schools?' And learn what you are doing in the yeshivas to improve education." Tucker Carlson, in an interview with a former yeshiva student, observed that the yeshiva system, with its emphasis on asking questions, "seems like a great education". Upon the establishment of the State of Israel in 1948, universal conscription was instituted for all able-bodied Jewish males. However, military-aged Haredi men were exempted from service in the Israel Defense Forces (IDF) under the Torato Umanuto arrangement, which officially granted deferred entry into the IDF for yeshiva students, but in practice allowed young Haredi men to serve for a significantly reduced period of time or bypass military service altogether. At that time, the Haredi population was very small and only 400 individuals were affected. However, the Haredi population grew rapidly. In 2018, the Israel Democracy Institute estimated that the Haredim comprised 12% of Israel's total population and 15% of its Jewish population. Haredim are also younger than the general population. Their absence from the IDF attracts significant resentment from secular Israelis. The exemption policy has drawn several common criticisms. Over the years, as many as 1,000 Haredi Jews have volunteered to serve in a Haredi Jewish unit of the IDF known as the Netzah Yehuda Battalion, or Nahal Haredi. The vast majority of Haredi men, however, continue to receive deferments from military service. Haredim usually reject the practice of IDF service on a number of grounds. The Torato Umanuto arrangement was enshrined in the Tal Law that came into force in 2002. The High Court of Justice later ruled that it could not be extended in its current form beyond August 2012. A replacement was expected. The IDF was, however, experiencing a shortage of personnel, and there were pressures to reduce the scope of the Torato Umanuto exemption. In March 2014, Israel's parliament approved legislation to end exemptions from military service for Haredi seminary students. The bill was passed by 65 votes to one, and an amendment allowing civilian national service by 67 to one. In June 2024, the Supreme Court of Israel declared any continued exemption from IDF conscription unlawful. The army began drafting 3,000 Haredi men the following month. There has been much uproar in Haredi society over moves towards Haredi conscription. 
While some Haredim see this as a great social and economic opportunity, others (including leading rabbis among them) strongly oppose this move. Among the extreme Haredim, there have been some more severe reactions. Several Haredi leaders have threatened that Haredi populations would leave the country if forced to enlist. Others have fueled public incitement against secular and National-Religious Jews, and specifically against politicians Yair Lapid and Naftali Bennett, who support and promote Haredi enlistment. Some Haredim have taken to threatening their fellows who agree to enlist, to the point of physically attacking some of them. The Shahar program, also known as Shiluv Haredim (Ultra-Orthodox integration), allows Haredi men aged 22 to 26 to serve in the army for about a year and a half. At the beginning of their service, they study mathematics and English, which are often not well covered in Haredi boys' schools. The program is partly aimed at encouraging Haredi participation in the workforce after military service. However, not all beneficiaries seem to be Haredim. As of 2013, figures from the Central Bureau of Statistics placed the employment rate of Haredi women at 73%, close to the national figure of 80% for non-Haredi Jewish women; while the share of working Haredi men had increased to 56%, it was still far below the 90% of non-Haredi Jewish men nationwide. As of 2021, most Haredi boys instead go to yeshivas and then continue to study at yeshiva after getting married. The Trajtenberg Committee, charged in 2011 with drafting proposals for economic and social change, called, among other things, for increasing employment among the Haredi population. Its proposals included encouraging military or national service and offering college prep courses for volunteers, creating more employment centers targeting Haredim, and offering experimental matriculation prep courses after yeshiva hours. The committee also called for increasing the number of Haredi students receiving technical training through the Industry, Trade, and Labor Ministry and forcing Haredi schools to carry out standardized testing, as is done at other public schools. It is estimated that the share of the Haredi community in employment is half that of the rest of the population. This has led to increasing financial deprivation, and 50% of children within the community live below the poverty line. This puts a strain on each family, the community, and often the Israeli economy. The demographic trend indicates the community will constitute an increasing percentage of the population, and consequently, Israel faces an economic challenge in the years ahead due to fewer people in the labor force. A report commissioned by the Treasury found that the Israeli economy may lose more than six billion shekels annually as a result of low Haredi participation in the workforce. The OECD in a 2010 report stated that, "Haredi families are frequently jobless, or are one-earner families in low-paid employment. Poverty rates are around 60% for Haredim." As of 2017, according to an Israeli finance ministry study, the Haredi participation rate in the labour force was 51%, compared with 89% for the rest of Israeli Jews. A 2018 study by Oren Heller, a senior economic researcher at the National Insurance Institute of Israel, found that while upward mobility among Haredim is significantly greater than the national average, unlike in the rest of the population it tends not to translate into significantly higher pay. 
Haredi families living in Israel benefited from government-subsidized child care when the father studied Torah and the mother worked at least 24 hours per week. However, after Israeli Finance Minister Avigdor Liberman introduced a new policy in 2021, families in which the father is a full-time yeshiva student are no longer eligible for a daycare subsidy. Under this policy, fathers must also work at least part-time in order for the family to qualify for the subsidy. The move was denounced by Haredi leaders. A 2025 Israel Democracy Institute study found that although Haredim made up 14% of Israel's working-age population in 2023, they generated only 4% of national tax revenue. As a result, the average non-Haredi worker is projected to pay an extra 3,540 shekels in taxes in 2025. Only 23% of Haredim pay income tax, compared to 62% of non-Haredi Jewish men and 46% of women. Employment among Haredi men declined to 54% in 2024, while the rate for Haredi women rose to 81% in 2023, just 2 percentage points below that of non-Haredi women. Due to a lack of secular education, many Haredi men are poorly equipped for the labor market, leading to lower household incomes. Despite contributing less in taxes, Haredi households consume more state services, receiving transportation and municipal tax discounts, housing aid, and other benefits; the Kohelet Policy Forum reported that 80% of Haredi households are net receivers of public funds. The IDI called this imbalance unsustainable. The Haredim in general are materially poorer than most other Israelis, but still represent an important market sector due to their bloc purchasing habits. For this reason, some companies and organizations in Israel refrain from including women or other images deemed immodest in their advertisements to avoid Haredi consumer boycotts. More than 50 percent of Haredim live below the poverty line, compared with 15 percent of the rest of the population. Their families are also larger, with Haredi women having an average of 6.7 children, while the average Jewish Israeli woman has 3 children. Families with many children often receive economic support through governmental child allowances and government assistance in housing, as well as specific funds from their own community institutions. In recent years, there has been a process of reconciliation and an attempt to integrate Haredi Jews into Israeli society, although employment discrimination is widespread. Haredi Jews such as satirist Kobi Arieli, publicist Sehara Blau, and politician Israel Eichler write regularly for leading Israeli newspapers. Another important factor in the reconciliation process has been the activities of ZAKA, a Haredi organization known for providing emergency medical attention at the scene of suicide bombings, and Yad Sarah, the largest national volunteer organization in Israel, established in 1977 by Uri Lupolianski, a former Haredi mayor of Jerusalem. Yad Sarah is estimated to save the country's economy some $320 million in hospital fees and long-term care costs each year. Present leadership and organizations Notwithstanding the authority of the Chief Rabbis of Israel (Ashkenazi: David Lau, Sephardi: Yitzhak Yosef), or the wide acknowledgement of specific rabbis in Israel (for example, Rabbi Gershon Edelstein of the non-Hasidic Lithuanian Jews, and Yaakov Aryeh Alter, who heads the Ger Hasidic dynasty, the largest Hasidic group in Israel), Haredi and Hasidic factions generally align with the independent authority of their respective group leaders. 
Other representative associations may be linked to specific Haredi and Hasidic groups. For example: Haredi political parties in Israel include: Past leaders of Haredi Jewry Leaders of Haredi Jewry in America included: Leaders of Haredi Jewry in Israel included Controversies People who decide to leave Haredi communities are sometimes shunned and pressured or forced to abandon their children. Cases of pedophilia, sexual violence, assaults, and abuses against women and children occur at roughly the same rates in Haredi communities as in the general population; however, they are rarely discussed or reported to the authorities, and are frequently downplayed by members of the communities. To receive a religious divorce, a Jewish woman needs her husband's consent in the form of a get (Jewish divorce document). Without this consent, any future offspring of the wife would be considered mamzerim (bastards/impure). If the circumstances truly warrant a divorce, and the husband is unwilling, a dayan (rabbinic judge) has the prerogative of instituting community shunning measures to "coerce him until he agrees", with physical force reserved only for the rarest of cases. The New York divorce coercion gang was a Haredi Jewish group that kidnapped, and in some cases tortured, Jewish men in the New York metropolitan area to force them to grant their wives gittin (religious divorces). The Federal Bureau of Investigation (FBI) broke up the group after conducting a sting operation against the gang in October 2013. The sting resulted in the prosecution of four men, three of whom were convicted in late 2015. In January 2023, the Times of Israel reported that Haredi citizens in Israel pay just 2% of the country's total income tax revenues, despite making up 13.9% of the nation's population. Furthermore, the article's author described their communities as an "epicenter of poverty", with over 60% of Haredi households classified as "poor" on the government's socio-economic index, a figure that remains nearly constant across Haredi communities. While this disparity has been present in Israel for decades, it has garnered more attention since December 2022 for numerous reasons. First, Haredi families have the highest fertility rate in Israel, at 6.6 births per woman. In comparison, the average fertility rate in Israel is much lower, at 2.9 per woman. Current projections estimate that the Haredi population will double by 2036 and that it will comprise 16% of the total population by 2030. The second aspect of the controversy surrounds their political connections to Israel's Religious Zionist alliance. Historically, Haredim remained politically uninvolved, but since the 1990s they have become increasingly engaged. Members of Israel's ultra-Orthodox community have long enjoyed benefits: exemption from army service for Torah students, government stipends for those choosing full-time religious study over work, and separate schools that receive state funds even though their curricula often do not fully teach government-mandated subjects. Today, many Israeli Haredi men do not work, preferring to study Torah full-time, which contributes to the community's high poverty rate. Hundreds of thousands of ultra-Orthodox men assembled in Jerusalem in October 2025 to demonstrate against conscription into the Israel Defence Forces (IDF). Hebrew media sources have highlighted that, in recent years, this issue has been the sole matter that has brought together all sects and factions within the ultra-Orthodox community. 
According to Israel's Channel 12, the last protest that similarly unified the Haredi community was another anti-conscription rally approximately a decade earlier. In media A Life Apart: Hasidism in America is a documentary film produced and directed by Menachem Daum and Oren Rudavsky, which aimed to portray the Hasidic Haredi world in more positive terms, stressing close family ties as well as rich traditions. Shtisel, an Israeli television drama series about a Haredi family in Jerusalem, has led to more favorable feelings about Haredi Jews. See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Polygons] | [TOKENS: 2153] |
Contents Polygon In geometry, a polygon (/ˈpɒlɪɡɒn/) is a plane figure made up of line segments connected to form a closed polygonal chain. The segments of a closed polygonal chain are called its edges or sides. The points where two edges meet are the polygon's vertices or corners. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon. A simple polygon is one which does not intersect itself. More precisely, the only allowed intersections among the line segments that make up the polygon are the shared endpoints of consecutive segments in the polygonal chain. A simple polygon is the boundary of a region of the plane that is called a solid polygon. The interior of a solid polygon is its body, also known as a polygonal region or polygonal area. In contexts where one is concerned only with simple and solid polygons, a polygon may refer only to a simple polygon or to a solid polygon. A polygonal chain may cross over itself, creating star polygons and other self-intersecting polygons. Some sources also consider closed polygonal chains in Euclidean space to be a type of polygon (a skew polygon), even when the chain does not lie in a single plane. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. There are many more generalizations of polygons defined for different purposes. Etymology The word polygon derives from the Greek adjective πολύς (polús) 'much', 'many' and γωνία (gōnía) 'corner' or 'angle'. It has been suggested that γόνυ (gónu) 'knee' may be the origin of gon. Classification Polygons are primarily classified by the number of sides. Polygons may be characterized by their convexity or type of non-convexity: The property of regularity may be defined in other ways: a polygon is regular if and only if it is both isogonal and isotoxal, or equivalently it is both cyclic and equilateral. A non-convex regular polygon is called a regular star polygon. Properties and formulas Euclidean geometry is assumed throughout. Any polygon has as many corners as it has sides. Each corner has several angles. The two most important ones are: In this section, the vertices of the polygon under consideration are taken to be $(x_0, y_0), (x_1, y_1), \ldots, (x_{n-1}, y_{n-1})$ in order. For convenience in some formulas, the notation $(x_n, y_n) = (x_0, y_0)$ will also be used. If the polygon is non-self-intersecting (that is, simple), the signed area is $A = \tfrac{1}{2}\sum_{i=0}^{n-1}(x_i y_{i+1} - x_{i+1} y_i)$; the area can also be written, using determinants, in terms of $Q_{i,j}$, the squared distance between $(x_i, y_i)$ and $(x_j, y_j)$. The signed area depends on the ordering of the vertices and on the orientation of the plane. Commonly, the positive orientation is defined by the (counterclockwise) rotation that maps the positive x-axis to the positive y-axis. If the vertices are ordered counterclockwise (that is, according to positive orientation), the signed area is positive; otherwise, it is negative. In either case, the area formula is correct in absolute value. This is commonly called the shoelace formula or surveyor's formula. The area A of a simple polygon can also be computed if the lengths of the sides, $a_1, a_2, \ldots, a_n$, and the exterior angles, $\theta_1, \theta_2, \ldots, \theta_n$, are known; the formula was described by Lopshits in 1963. 
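As a concrete illustration of the shoelace formula above, here is a minimal Python sketch; the function name and the sample square are illustrative choices rather than anything taken from the article. It returns the signed area, positive when the vertices are listed counterclockwise.

```python
def shoelace_area(vertices):
    """Signed area of a simple polygon given as a list of (x, y) vertices in order.

    Positive for counterclockwise ordering, negative for clockwise;
    take abs() of the result for the geometric area.
    """
    n = len(vertices)
    area = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_j, y_j = vertices[(i + 1) % n]  # wrap-around: (x_n, y_n) = (x_0, y_0)
        area += x_i * y_j - x_j * y_i
    return area / 2.0

# Example: a unit square traversed counterclockwise has signed area +1.
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```

Reversing the vertex order flips the sign, matching the orientation convention described in the text.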
If the polygon can be drawn on an equally spaced grid such that all its vertices are grid points, Pick's theorem gives a simple formula for the polygon's area based on the numbers of interior ($I$) and boundary ($B$) grid points: $A = I + \tfrac{B}{2} - 1$, that is, the former number plus one-half the latter number, minus 1. In every polygon with perimeter $p$ and area $A$, the isoperimetric inequality $p^2 > 4\pi A$ holds. For any two simple polygons of equal area, the Bolyai–Gerwien theorem asserts that the first can be cut into polygonal pieces which can be reassembled to form the second polygon. The lengths of the sides of a polygon do not in general determine its area. However, if the polygon is simple and cyclic then the sides do determine the area. Of all n-gons with given side lengths, the one with the largest area is cyclic. Of all n-gons with a given perimeter, the one with the largest area is regular (and therefore cyclic). Many specialized formulas apply to the areas of regular polygons. The area of a regular polygon is given in terms of the radius $r$ of its inscribed circle and its perimeter $p$ by $A = \tfrac{1}{2} r p$. This radius is also termed its apothem and is often represented as $a$. The area of a regular n-gon can be expressed in terms of the radius $R$ of its circumscribed circle (the unique circle passing through all vertices of the regular n-gon) as $A = \tfrac{1}{2} n R^2 \sin\left(\tfrac{2\pi}{n}\right)$. The area of a self-intersecting polygon can be defined in two different ways, giving different answers: Using the same convention for vertex coordinates as in the previous section, the coordinates of the centroid of a solid simple polygon are $C_x = \tfrac{1}{6A}\sum_{i=0}^{n-1}(x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$ and $C_y = \tfrac{1}{6A}\sum_{i=0}^{n-1}(y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$. In these formulas, the signed value of the area $A$ must be used. For triangles (n = 3), the centroids of the vertices and of the solid shape are the same, but, in general, this is not true for n > 3. The centroid of the vertex set of a polygon with n vertices has the coordinates $\left(\tfrac{1}{n}\sum_{i=0}^{n-1} x_i, \; \tfrac{1}{n}\sum_{i=0}^{n-1} y_i\right)$. Generalizations The idea of a polygon has been generalized in various ways. Some of the more important include: Naming The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions. Beyond decagons (10-sided) and dodecagons (12-sided), mathematicians generally use numerical notation, for example 17-gon and 257-gon. Exceptions exist for side counts that are easily expressed in verbal form (e.g. 20 and 30), or are used by non-mathematicians. Some special polygons also have their own names; for example the regular star pentagon is also known as the pentagram. To construct the name of a polygon with more than 20 and fewer than 100 edges, combine the prefixes as follows. The "kai" term applies to 13-gons and higher and was used by Kepler, and advocated by John H. Conway for clarity of concatenated prefix numbers in the naming of quasiregular polyhedra, though not all sources use it. History Polygons have been known since ancient times. The regular polygons were known to the ancient Greeks, with the pentagram, a non-convex regular polygon (star polygon), appearing as early as the 7th century B.C. on a krater by Aristophanes, found at Caere and now in the Capitoline Museum. 
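To sanity-check the regular-polygon area formulas quoted above, the following Python sketch (function names are illustrative assumptions) computes the area of a regular n-gon both from its circumradius and from its apothem and perimeter; the two results should agree.

```python
import math

def area_from_circumradius(n, R):
    """Area of a regular n-gon with circumradius R: (1/2) * n * R^2 * sin(2*pi/n)."""
    return 0.5 * n * R**2 * math.sin(2 * math.pi / n)

def area_from_apothem(n, apothem, side):
    """Area of a regular n-gon as (1/2) * perimeter * apothem."""
    return 0.5 * (n * side) * apothem

# Cross-check on a regular hexagon with circumradius 1:
# the side length equals R and the apothem is R * cos(pi/n).
n, R = 6, 1.0
side = 2 * R * math.sin(math.pi / n)
apothem = R * math.cos(math.pi / n)
print(area_from_circumradius(n, R))        # ~2.598
print(area_from_apothem(n, apothem, side))  # ~2.598 (same value)
```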
The first known systematic study of non-convex polygons in general was made by Thomas Bradwardine in the 14th century. In 1952, Geoffrey Colin Shephard generalized the idea of polygons to the complex plane, where each real dimension is accompanied by an imaginary one, to create complex polygons. In nature Polygons appear in rock formations, most commonly as the flat facets of crystals, where the angles between the sides depend on the type of mineral from which the crystal is made. Regular hexagons can occur when the cooling of lava forms areas of tightly packed columns of basalt, which may be seen at the Giant's Causeway in Northern Ireland, or at the Devil's Postpile in California. In biology, the surface of the wax honeycomb made by bees is an array of hexagons, and the sides and base of each cell are also polygons. Computer graphics In computer graphics, a polygon is a primitive used in modelling and rendering. Polygons are defined in a database containing arrays of vertices (the coordinates of the geometrical vertices, as well as other attributes of the polygon, such as color, shading and texture), connectivity information, and materials. Any surface is modelled as a tessellation called a polygon mesh. If a square mesh has $n + 1$ points (vertices) per side, there are $n^2$ squares in the mesh, or $2n^2$ triangles, since there are two triangles in a square. There are $(n + 1)^2 / (2n^2)$ vertices per triangle; where $n$ is large, this approaches one half. Alternatively, each vertex inside the square mesh connects four edges (lines). The imaging system calls up the structure of polygons needed for the scene to be created from the database. This is transferred to active memory and finally to the display system (screen, TV monitors, etc.) so that the scene can be viewed. During this process, the imaging system renders polygons in correct perspective, ready for transmission of the processed data to the display system. Although polygons are two-dimensional, the imaging system places them in a visual scene in the correct three-dimensional orientation. In computer graphics and computational geometry, it is often necessary to determine whether a given point $P = (x_0, y_0)$ lies inside a simple polygon given by a sequence of line segments. This is called the point in polygon test. See also References External links |
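The point in polygon test mentioned above is commonly implemented by ray casting: count how many polygon edges a horizontal ray from the query point crosses, and the point is inside exactly when that count is odd. The sketch below is one minimal Python version of that standard approach, not code from the article, and it ignores edge cases such as a query point lying exactly on a boundary.

```python
def point_in_polygon(px, py, vertices):
    """Ray-casting test for a simple polygon given as a list of (x, y) vertices.

    Casts a horizontal ray from (px, py) to the right and toggles `inside`
    each time the ray crosses an edge of the polygon.
    """
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Consider only edges that straddle the horizontal line y = py.
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that horizontal line.
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon(1, 1, square))  # True  (inside)
print(point_in_polygon(3, 1, square))  # False (outside)
```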
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-riazuelo-3] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the cylindrically symmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner-Nordstrom and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: Almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality {\displaystyle {\frac {Q^{2}}{4\pi \epsilon _{0}}}+{\frac {c^{2}J^{2}}{GM^{2}}}\leq GM^{2}} for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Roger Penrose, rules out the formation of such singularities through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even reach extremality, the threshold beyond which a naked singularity would form, since natural processes counteract increasing spin and charge as a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly. One black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole, Sagittarius A*, rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
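The charge and spin bounds quoted above lend themselves to a quick numerical check. The following is a minimal sketch, not taken from the article: the constants are standard SI values, the 10-solar-mass example is illustrative, and the function names are invented for this example.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m
M_SUN = 1.989e30     # solar mass, kg

def is_subextremal(M, Q=0.0, J=0.0):
    # An event horizon exists only if Q^2/(4*pi*eps0) + c^2*J^2/(G*M^2) <= G*M^2
    return Q**2 / (4 * math.pi * eps0) + (c * J)**2 / (G * M**2) <= G * M**2

def max_angular_momentum(M):
    # Extremal (maximum) spin angular momentum for an uncharged hole: J = G*M^2/c
    return G * M**2 / c

def dimensionless_spin(M, J):
    # Spin parameter a* = c*J/(G*M^2), lying between 0 and 1 for a black hole
    return c * J / (G * M**2)

M = 10 * M_SUN
J_max = max_angular_momentum(M)            # ~8.8e43 kg m^2 s^-1
print(dimensionless_spin(M, 0.9 * J_max))  # 0.9, i.e. 90% of the maximum rate
print(is_subextremal(M, J=1.1 * J_max))    # False: no horizon, a naked singularity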
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the x-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gasses accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is J ≤ G M 2 c , {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that 0 ≤ c J G M 2 ≤ 1. {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by Q ≤ G M , {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two-to-four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10−5 grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes progenated by low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: If the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure occurs from the Pauli exclusion principle—Particles will resist being in the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become a white dwarf. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. 
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 102 to 105 solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110-350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 106 times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 109-1010 solar masses. Theoretical models predict that the accretion disc that feeds black holes will become unstable once a black hole reaches 50-100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the surroundings of black holes some of the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the black hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
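As a rough illustration of the mass bands just described, the sketch below groups a mass into the approximate categories used in the text. The band edges are the article's round figures, not sharp physical boundaries, and the helper name is invented for this example.

def classify_black_hole(mass_solar):
    # Toy labels using the approximate mass ranges quoted in the text
    if mass_solar < 100:
        return "stellar-mass (~2 to ~100 solar masses)"
    elif mass_solar < 1e6:
        return "intermediate-mass (~10^2 to ~10^5 solar masses)"
    elif mass_solar < 1e9:
        return "supermassive (more than ~10^6 solar masses)"
    return "ultramassive (more than ~10^9 solar masses)"

print(classify_black_hole(15))       # stellar-mass
print(classify_black_hole(4.3e6))    # supermassive, e.g. Sagittarius A*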
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of x-rays. The temperature of these disks can range from thousands to millions of Kelvin, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as polish donuts due to their thick, toroidal shape that resembles that of a donut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbations to this orbit will lead to the particle spiraling into the black hole, and any outward perturbations will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is: r I S C O = 3 r s = 6 G M c 2 , {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where r I S C O {\displaystyle r_{\rm {_{ISCO}}}} is the radius of the ISCO, r s {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the black hole, G {\displaystyle G} is the gravitational constant, and c {\displaystyle c} is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde). 
For example, the ISCO for a particle orbiting retrograde can be as far out as about 9 r s {\displaystyle 9r_{\text{s}}} , while the ISCO for a particle orbiting prograde can be as close as at the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1-3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3-5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates similar to a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region. 
In this area it is no longer possible for free falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through r s = 2 G M c 2 ≈ 2.95 M M ⊙ k m , {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km,} } where rs is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to r + = G M c 2 , {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increase with the cube of the radius, average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 108 M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside of the black hole. The inner horizon is divided up into two segments: an ingoing section and an outgoing section. 
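The horizon-scale relations quoted in this section (the Schwarzschild radius, the photon sphere at 1.5 r_s, the ISCO at 3 r_s, and the 1/M^2 scaling of average density) can be collected in a short numerical sketch. This is an illustration assuming a nonspinning, uncharged black hole and standard SI constants; it is not code from any cited source.

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def schwarzschild_radius(M):
    # r_s = 2*G*M/c^2, the event-horizon radius of a nonspinning, uncharged hole
    return 2 * G * M / c**2

def photon_sphere_radius(M):
    # 1.5 r_s for a Schwarzschild black hole
    return 1.5 * schwarzschild_radius(M)

def isco_radius(M):
    # Innermost stable circular orbit: 3 r_s = 6*G*M/c^2 for a spinless particle
    return 3 * schwarzschild_radius(M)

def mean_density(M):
    # Mass divided by the volume of a sphere of radius r_s; scales as 1/M^2
    r = schwarzschild_radius(M)
    return M / ((4.0 / 3.0) * math.pi * r**3)

print(schwarzschild_radius(M_SUN) / 1e3)   # ~2.95 km for one solar mass
print(isco_radius(10 * M_SUN) / 1e3)       # ~89 km for a 10-solar-mass hole
print(mean_density(1e8 * M_SUN))           # ~1.8e3 kg/m^3, comparable to water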
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with the exponential growth of a parameter describing the black hole's internal mass, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole contains a singularity, a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including those that add quantum effects, can lead to regular, or nonsingular, black holes without central singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse driven by hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse and will start fusing progressively heavier elements until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process to build supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of nearly pure hydrogen gas (low-metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~105 M☉ could have formed in this way, which could then grow to ~109 M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly only found in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed areas that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10−8 kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 1015 g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays and Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10−25 seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: As a binary of supermassive black holes approach each other, most nearby stars are ejected, leaving little for the remaining black holes to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter on black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk. 
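The Eddington limit mentioned above can be put in rough numbers. The sketch below assumes accretion of fully ionized hydrogen and the standard Thomson scattering cross-section; these assumptions and the example masses are illustrative choices, not details given in the article.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
m_p = 1.673e-27      # proton mass, kg
sigma_T = 6.652e-29  # Thomson cross-section, m^2
M_SUN = 1.989e30     # kg
L_SUN = 3.828e26     # solar luminosity, W

def eddington_luminosity(M):
    # Luminosity at which radiation pressure on ionized hydrogen balances gravity
    return 4 * math.pi * G * M * m_p * c / sigma_T

print(eddington_luminosity(M_SUN))                 # ~1.3e31 W (~3e4 solar luminosities)
print(eddington_luminosity(1e8 * M_SUN) / L_SUN)   # ~3e12 L_sun for a 1e8 solar-mass hole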
Accretion beyond the limit is called Super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies with the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. 
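The Hawking-temperature figures quoted above (about 62 nanokelvins for one solar mass, and a sub-lunar mass needed to outshine the cosmic microwave background) follow from the standard relation T = ħc^3/(8πGMk_B). A minimal sketch with standard constants, not taken from any cited source:

import math

hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # kg
M_MOON = 7.35e22    # kg

def hawking_temperature(M):
    # Black-body temperature of Hawking radiation; inversely proportional to mass
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def mass_for_temperature(T):
    # Mass whose Hawking temperature equals T (inverting the relation above)
    return hbar * c**3 / (8 * math.pi * G * k_B * T)

print(hawking_temperature(M_SUN))      # ~6.2e-8 K, the ~62 nK quoted in the text
m_crit = mass_for_temperature(2.7)     # hotter than the CMB only below this mass
print(m_crit, m_crit < M_MOON)         # ~4.5e22 kg, indeed less than the Moon's mass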
They are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy which scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29 Observational evidence Millions of black holes with around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is based on its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter size of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split down two long arms of a tunnel. The laser beams reflect off of mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, they do not cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and carefully control for noise from Earth to be able to detect these gravitational waves. Since the first measurements in 2016, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. 
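The mass inference described next is essentially Kepler's third law applied to those stellar orbits. The sketch below uses rounded published values for the star S2 (a period of roughly 16 years and a semi-major axis of roughly 1,000 AU); these numbers are illustrative approximations, not figures taken from this article.

import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
AU = 1.496e11     # m
YEAR = 3.156e7    # s

def enclosed_mass(semi_major_axis_m, period_s):
    # Kepler's third law: central mass implied by a small body's orbit
    return 4 * math.pi**2 * semi_major_axis_m**3 / (G * period_s**2)

# Rounded, approximate orbital elements for the star S2 around Sagittarius A*
mass = enclosed_mass(1000 * AU, 16 * YEAR)
print(mass / M_SUN)   # ~4e6 solar masses, of the same order as the value quoted for Sgr A*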
In 1998, by fitting the motions of the stars to Keplerian orbits, the astronomers were able to infer that a 2.6×106 M☉ object must be contained within a radius of 0.02 light-years around Sagittarius A*. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass of Sagittarius A* to 4.3×106 M☉, with a radius of less than 0.002 light-years. This upper limit radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff (TOV) limit dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star, reported in 1986, led to a compact object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
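One standard way to turn the binary measurements described above into a compact-object mass (a method not spelled out in the article) is the binary mass function, which depends only on the orbital period and the companion's radial-velocity amplitude and gives a strict lower bound on the unseen object's mass. A sketch with illustrative numbers, roughly those reported for the low-mass X-ray binary V404 Cygni:

import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
DAY = 86400.0     # s

def mass_function(period_s, K_ms):
    # f = P*K^3/(2*pi*G) = (M_x sin i)^3 / (M_x + M_companion)^2,
    # so f is a strict lower bound on the compact object's mass M_x
    return period_s * K_ms**3 / (2 * math.pi * G)

f = mass_function(6.5 * DAY, 2.1e5)   # ~6.5-day period, ~210 km/s velocity amplitude
print(f / M_SUN)                      # ~6 solar masses: above the ~2 solar-mass TOV
                                      # limit, so the unseen object cannot be a neutron star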
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as atypical spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind the lensing mass appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve—this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Even denser objects, called electroweak stars, would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; This vacuum energy would be much larger than the vacuum energy of outside space, exerting outwards pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or formation of a singularity; It could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126 Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far as redshift z ≥ 7 {\displaystyle z\geq 7} . These black holes have been assumed to be the products of the gravitational collapse of large population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies. 
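The timing tension described above can be made concrete with the e-folding (Salpeter) timescale for growth capped at the Eddington rate. The sketch below assumes a radiative efficiency of 10% and illustrative seed and final masses; none of these numbers come from the article.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
m_p = 1.673e-27      # proton mass, kg
sigma_T = 6.652e-29  # Thomson cross-section, m^2
YEAR = 3.156e7       # s

def efolding_time(efficiency=0.1):
    # e-folding (Salpeter) time for Eddington-limited growth, in seconds
    return (efficiency / (1.0 - efficiency)) * sigma_T * c / (4 * math.pi * G * m_p)

def time_to_grow(seed_solar, final_solar, efficiency=0.1):
    # Years required to grow from a seed mass to a final mass at the Eddington rate
    return efolding_time(efficiency) * math.log(final_solar / seed_solar) / YEAR

# Illustrative: a 100 solar-mass stellar remnant growing into a 1e9 solar-mass quasar engine
print(time_to_grow(100, 1e9))   # ~8e8 years, comparable to the age of the universe at z ~ 7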
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk suppressing the outward radiation pressure that would otherwise prevent the black hole from accreting. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEKent2001560–561-149] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who said "$299" and left the audience with a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed during 1999–2000 through Sony showrooms, selling 100 units. Sony eventually launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because a third company had registered the trademark, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console was withdrawn, however, PlayStation imports and widespread piracy increased. In China, the Sega Saturn was the most popular 32-bit console, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans including "Live in Your World. Play in Ours.", stylised with the controller's button symbols standing in for letters, and "U R NOT E" (with a red E, read as "You are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstration versions of select games could be played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, prompted by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is a R3000 CPU made by LSI Logic operating at a clock rate of 33.8688 MHz and 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service and with the necessary documentation and software to program PlayStation games and applications through C programming compilers. 
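As a quick arithmetic check of the hardware description above, the sketch below works out how much of the console's 1 MB of video RAM a single frame buffer occupies at the quoted display resolutions. It assumes a 16-bit-per-pixel colour format throughout; actual per-game buffer layouts (double buffering, texture pages, 24-bit modes for video playback) are assumptions made for illustration rather than details stated in the text.

```python
# Arithmetic illustration only: frame-buffer sizes versus the PlayStation's 1 MB of VRAM.
VRAM_BYTES = 1 * 1024 * 1024   # 1 MB of video RAM

def framebuffer_bytes(width, height, bits_per_pixel=16):
    """Bytes needed for a single frame buffer at the given resolution."""
    return width * height * bits_per_pixel // 8

# Lowest and highest resolutions quoted in the hardware description, plus a common mid-point.
for width, height in [(256, 224), (320, 240), (640, 480)]:
    size = framebuffer_bytes(width, height)
    print(f"{width}x{height} @ 16bpp: {size / 1024:6.1f} KB "
          f"({size / VRAM_BYTES:.0%} of VRAM)")

# 640x480 at 16bpp already needs ~600 KB, which suggests why the highest
# resolution leaves little room in VRAM for textures or a second buffer.
```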
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controllers are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (therefore duplicating the discs omitting it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well vented area or raise the unit up slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), Metal Gear Solid (1998), all of which became established franchises. 
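Returning briefly to the disc-authentication scheme described earlier: the sketch below is a purely conceptual illustration of the boot-time check, not the actual drive-controller firmware, which performs this decoding in hardware while reading the pregap. The region strings "SCEI", "SCEA" and "SCEE" are widely reported values for the wobble-encoded signature but are treated here as assumptions rather than confirmed constants.

```python
# Conceptual sketch only: models the boot-time authenticity and region check
# implied by the wobble-encoded pregap signature. All names and values here
# are illustrative assumptions, not Sony firmware.

CONSOLE_REGION = "SCEA"  # hypothetical: the signature this particular console expects

def decode_pregap_wobble(disc):
    """Return the signature recovered from the pregap wobble, or None when the
    modulated wobble is absent, as on a CD-R copy that a normal burner cannot
    reproduce."""
    return disc.get("wobble_signature")

def disc_will_boot(disc):
    signature = decode_pregap_wobble(disc)
    if signature is None:
        return False                      # burned copy: wobble missing, refuse to boot
    return signature == CONSOLE_REGION    # regional lockout: signature must match the console

# A pressed disc for the matching region boots; a burned copy of the same game does not.
print(disc_will_boot({"wobble_signature": "SCEA"}))  # True
print(disc_will_boot({"wobble_signature": None}))    # False
```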
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor for 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the west generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complemented the comfort of its controller and the convenience of its memory cards. Giving the system 41⁄2 out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling that of Sega and Nintendo. 
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, with profits from their video game division coming to account for 23% of the company's overall profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass-market success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its assistance in transitioning game industry to use the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, attempting to reverse engineer PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully rooted version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the proprietary cartridge-relying Nintendo 64,[d] which the industry had expected to use CDs like PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that they could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
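To make the pricing comparison above concrete, here is a small illustrative calculation. The per-unit figures are hypothetical round numbers, not documented Sony or Nintendo costs; they are chosen only to show how a large difference in media cost can be passed on as a roughly 40% lower retail price while leaving the publisher's net revenue per copy unchanged.

```python
# Hypothetical round numbers for illustration only; not documented cost figures.
cartridge_retail = 70.00   # assumed cartridge retail price
cartridge_media  = 30.00   # assumed per-unit cartridge manufacturing/licensing cost
cd_media         = 2.00    # assumed per-unit CD pressing cost

net_revenue = cartridge_retail - cartridge_media   # publisher's take per cartridge copy
cd_retail   = net_revenue + cd_media               # same net revenue, far cheaper media

print(f"CD retail price: ${cd_retail:.2f}")                               # $42.00
print(f"Reduction vs cartridge: {1 - cd_retail / cartridge_retail:.0%}")  # 40%
```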
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64 (Konami, releasing only thirteen N64 games but over fifty on the PlayStation). Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second-parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run off the open source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB-Type A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at @ 1.5 GHz and a Power VR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 Gigabyte of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavorably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |