[SOURCE: https://en.wikipedia.org/wiki/Ticker_symbol] | [TOKENS: 1908] |
Ticker symbol A ticker symbol or stock symbol is an abbreviation used to uniquely identify publicly traded shares of a particular stock or security on a particular stock exchange. Ticker symbols are arrangements of symbols or characters (generally Latin letters or digits) which provide a shorthand for investors to refer to, purchase, and research securities. Some exchanges include ticker extensions, which encode additional information such as share class, bankruptcy status, or voting rights into the ticker. The first ticker symbol was used in 1867, following the invention of the ticker tape machine by Edward Calahan. It was used to identify shares of the Union Pacific Railroad Company. Interpreting the symbol Stock symbols are unique identifiers assigned to each security traded on a particular market. A stock symbol can consist of letters, numbers, or a combination of both, and is a way to uniquely identify that stock. The symbols were kept as short as possible to reduce the number of characters that had to be printed on the ticker tape, and to make them easy for traders and investors to recognize. The allocation of symbols and formatting conventions is specific to each stock exchange. In the US, for example, stock tickers are typically between 1 and 4 letters and represent the company name where possible. For example, US-based computer company Apple Inc., traded on the NASDAQ exchange, has the symbol AAPL, while the motor company Ford's stock, traded on the New York Stock Exchange, has the single-letter ticker F. In Europe, most exchanges use three-letter codes; for example, British-Dutch consumer goods company Unilever has the symbol UNA on the Amsterdam Euronext exchange and ULVR on the London Stock Exchange. In Asia, numbers are often used as stock tickers to avoid issues for international investors when using non-Latin scripts. For example, the bank HSBC's stock has the ticker symbol 5 on the Hong Kong Stock Exchange, HSBC on the New York Stock Exchange, and HSBA on the London Stock Exchange. Symbols sometimes change to reflect mergers. Prior to the 1999 merger with Mobil Oil, Exxon used a phonetic spelling of the company name, "XON", as its ticker symbol. The symbol of the firm after the merger was "XOM". Symbols are sometimes reused. In the US the single-letter symbols are particularly sought after as vanity symbols. For example, since March 2008 Visa Inc. has used the symbol V, which had previously been used by Vivendi, which had delisted and given up the symbol. To fully qualify a stock, both the ticker and the exchange or country of listing need to be known. On many systems both must be specified to uniquely identify the security. This is often done by appending the location or exchange code to the ticker. Although stock tickers identify a security, they are exchange-dependent, generally limited to stocks, and can change. These limitations have led to the development of other codes in financial markets to identify securities for settlement purposes. The most prevalent of these is the International Securities Identification Number (ISIN). An ISIN uniquely identifies a security and its structure is defined in ISO 6166. Securities for which ISINs are issued include bonds, commercial paper, stocks, and warrants. The ISIN code is a 12-character alphanumeric code that does not contain information characterizing financial instruments, but serves for uniform identification of a security at trading and settlement.
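As a concrete illustration of that 12-character structure: an ISIN's final character is a check digit, and it can be verified with the Luhn mod-10 algorithm after expanding letters to two-digit numbers (A=10 ... Z=35). The Python sketch below is illustrative only and is not part of the article; it ignores country-code validation and assumes reasonably well-formed input.

```python
def isin_check_digit_ok(isin: str) -> bool:
    """Validate the final check digit of a 12-character ISIN (ISO 6166).

    Letters are expanded to two-digit numbers (A=10 ... Z=35) and the
    resulting digit string, including the check digit, must pass the
    Luhn mod-10 test."""
    isin = isin.strip().upper()
    if len(isin) != 12 or not isin[:2].isalpha() or not isin[-1].isdigit() or not isin.isalnum():
        return False
    digits = "".join(str(int(ch, 36)) for ch in isin)  # 'A' -> '10', '7' -> '7'
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:                       # double every second digit from the right
            n = n * 2 - 9 if n > 4 else n * 2
        total += n
    return total % 10 == 0

# The Mercedes-Benz Group ISIN quoted in the next paragraph:
print(isin_check_digit_ok("DE0007100000"))  # True
```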
The ISIN identifies the security, not the exchange (if any) on which it trades; it is, therefore, not a replacement for the ticker symbol. For instance, Mercedes-Benz Group stock trades on twenty-two different stock exchanges worldwide and is priced in five foreign currencies; it has the same ISIN on each (DE0007100000), though not the same ticker symbol. ISIN cannot specify a particular trade in this case, and another identifier, typically the three- or four-letter exchange code (such as the Market Identifier Code), will have to be specified in addition to the ISIN. While usually a stock ticker identifies a security that can be traded, stock market indices are also sometimes assigned a symbol, even though they can generally not be traded. Symbols for indices are usually distinguished by adding a symbol in front of the name, such as a circumflex (or 'caret') ^ or a dot. For example, Reuters lists the Nasdaq Composite index under the symbol .IXIC. Symbols by country In Australia the Australian Securities Exchange uses the following conventions: Three character base symbol with the first and third character being alphanumeric and the second alphabetic. ETFs and ETMFs can be either 3 or 4 characters. Exchange-traded warrants and exchange-traded options are six characters. ETOs can have numbers in the sixth character. In Canada the Toronto Stock Exchange TSX and the TSXV use the following special codes after the ticker symbol: In the United Kingdom, prior to 1996, stock codes were known as EPICs, named after the London Stock Exchange's Exchange Price Information Computer (e.g.: "MKS" for Marks and Spencer). Following the introduction of the Sequence trading platform in 1996, EPICs were renamed Tradable Instrument Display Mnemonics (TIDM), but they are still widely referred to as EPICs. Stocks can also be identified using their SEDOL (Stock Exchange Daily Official List) number or their ISIN (International Securities Identification Number). In the United States, modern letter-only ticker symbols were developed by Standard & Poor's (S&P) to bring a national standard to investing. Previously, a single company could have many ticker symbols as they varied between the dozens of individual stock markets. The term ticker refers to the noise made by the ticker tape machines once widely used by stock exchanges. The S&P system was later standardized by the securities industry and modified as the years passed. Stock symbols for preferred stock have not been standardized. Some companies use a well-known product as their ticker symbol. Belgian brewer AB InBev, the brewer of Budweiser beer, uses "BUD" (symbolizing its premier product in the United States) as its three-letter ticker for American Depository Receipts. Its rival, the Molson Coors Brewing Company, uses a similarly beer-related symbol, "TAP". Likewise, Southwest Airlines pays tribute to its headquarters at Love Field in Dallas through its "LUV" symbol. Six Flags Entertainment Corporation, which operates large amusement parks in the United States, uses "FUN" as its symbol. Acushnet Company uses "GOLF," as the company sells products related with golf. Harley-Davidson uses "HOG", an abbreviation for the corporate-sponsored Harley Owners Group. Yamana Gold uses "AUY", because on the periodic table of elements, "Au" is the symbol for gold. Sotheby's, an auction house, previously used the symbol "BID". Petco uses the symbol "WOOF," referencing a dog's bark (even though the corporate logo features both a dog and a cat). 
While most symbols come from the company's name, sometimes it happens the other way around. Tricon Global, owner of KFC, Pizza Hut and Taco Bell, adopted the symbol "YUM" to represent its corporate mission when the company was spun off from PepsiCo in 1997. In 2002, the company changed its name to match its symbol, adopting the name Yum! Brands. Symbols sometimes change to reflect mergers. As noted above, before the 1999 merger with Mobil, Exxon used a phonetic spelling of the company name, "XON", as its ticker symbol; the symbol of the firm after the merger was "XOM". After Hewlett-Packard merged with Compaq, the new firm took on the ticker symbol "HPQ". (The former symbols were HWP and CPQ.) AT&T's ticker symbol is "T"; accordingly, the company is referred to simply as "Telephone" on Wall Street (the T symbol is so well known that when SBC purchased the company, it took the AT&T name, capitalizing on its history and keeping the desired single-letter symbol).[citation needed] Formerly, a glance at a U.S. stock symbol and its appended codes would allow an investor to determine where a stock trades; however, in July 2007, the SEC approved a rule change allowing companies moving from the New York Stock Exchange to the Nasdaq to retain their three-letter symbols; DirecTV was one of the first companies to make this move. When first implemented, the rule change did not apply to companies with one- or two-letter symbols, but subsequently any stock was able to move from the NYSE to the Nasdaq without changing its symbol. CA Technologies, which traded under the symbol CA before it was acquired by Broadcom Inc. in 2018, moved from the NYSE to the Nasdaq in April 2008 and kept its two-letter symbol. In countries where Arabic script is used, and in East Asia, transliterated Latin-script versions of company names may be confusing to an unpracticed Western reader; stock symbols provide a simple means of clear communication in the workplace. Many Asian countries use ticker symbols consisting only of digits and Roman letters to facilitate international trade.
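Because a bare ticker is ambiguous across exchanges, systems that consume market data usually pair it with an exchange identifier such as an ISO 10383 Market Identifier Code (MIC) and, for settlement, an ISIN, as described above. The sketch below is a hypothetical illustration of such a data structure; the "TICKER:MIC" display convention is an assumption made for demonstration, not an official format, and the example values are included purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Listing:
    """One listing of a security: the ticker only makes sense together with
    the exchange (MIC) it trades on; the ISIN identifies the security itself
    across all of its listings."""
    ticker: str
    mic: str    # ISO 10383 Market Identifier Code, e.g. "XLON" for the London Stock Exchange
    isin: str

    def qualified_symbol(self) -> str:
        # Illustrative "TICKER:MIC" convention, not a standard.
        return f"{self.ticker}:{self.mic}"

# The same company listed on two exchanges: same ISIN, different tickers.
hsbc_hk = Listing(ticker="5", mic="XHKG", isin="GB0005405286")
hsbc_ldn = Listing(ticker="HSBA", mic="XLON", isin="GB0005405286")
assert hsbc_hk.isin == hsbc_ldn.isin
assert hsbc_hk.qualified_symbol() != hsbc_ldn.qualified_symbol()
```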
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Online_participation] | [TOKENS: 4069] |
Online participation Online participation describes the interaction between users and online communities on the web. Online communities often rely on members to provide content to the website or to contribute in some way. Examples include wikis, blogs, online multiplayer games, and other types of social platforms. Online participation is currently a heavily researched field. It provides insight into fields such as web design, online marketing, crowdsourcing, and many areas of psychology. Some subcategories that fall under online participation are: commitment to online communities, coordination and interaction, and member recruitment. Motivations Many online communities (e.g. blogs, chat rooms, electronic mailing lists, Internet forums, imageboards, wikis) are not only knowledge-sharing resources but also fads. Studies have shown that committed members of online communities have reasons to remain active. As long as members feel the need to contribute, there is a mutual dependence between the community and the member. Although many researchers have come up with several motivational factors behind online contribution, these theories can all be categorized under intrinsic and extrinsic motivations. Intrinsic motivation refers to an action that is driven by personal interest and internal emotions in the task itself, while extrinsic motivation refers to an action that is influenced by external factors, often for a certain outcome, reward or recognition. The two types of motivation contradict each other but often go hand in hand in cases where continual contribution is observed. Several motivational factors lead people to continue their participation in these online communities and remain loyal. Peter Kollock researched motivations for contributing to online communities. Kollock (1999, p. 227) outlines three motivations that do not rely on altruistic behavior on the part of the contributor: anticipated reciprocity, increased recognition, and sense of efficacy. Another motivation, which Marc Smith mentions in his 1992 thesis Voices from the WELL: The Logic of the Virtual Commons, is "communion"—a "sense of community", as it is referred to in social psychology. Put simply, the community is made by people, for people. A person is motivated to contribute valuable information to the group in the expectation that they will receive useful help and information in return. Indeed, there is evidence that active participants in online communities get more, and faster, responses to questions than unknown participants. The higher the expectation of reciprocity, the greater the chance of there being high knowledge contribution intent in an online community. Reciprocity represents a sense of fairness whereby individuals usually reciprocate the positive feedback they receive from others so that they can in return get more useful knowledge from others in the future. Research has shown that self-esteem needs for recognition from others lead to expectations of reciprocity. Self-esteem plays such an important role in the need for reciprocity because contributing to online communities can be an ego booster for many types of users. The more positive feedback contributors get from other members of their community, the closer they may feel to being considered an expert in the knowledge they are sharing.
Because of this, contributing to online communities can lead to a sense of self-value and respect, based on the level of positive feedback reciprocated from the community. A study on participation in eBay's reputation system demonstrated that the expectation of reciprocal behavior from partners increases participation from self-interested eBay buyers and sellers. Standard economic theory predicts that people are not inclined to contribute voluntarily to the provision of such public goods but, rather, tend to free ride on the contributions of others. Nevertheless, empirical results from eBay show that buyers submit ratings for more than 50% of transactions. The main takeaways were that experienced users tend to rate more frequently, and that the motivation for leaving comments is not driven primarily by pure altruism towards the specific transaction partner, but by self-interest, reciprocity, and the "warm glow" feeling of contribution. Some theories support altruism as being a key motivator in online participation and reciprocity. Although evidence from sociology, economics, political science, and social psychology shows that altruism is part of human nature, recent research reveals that the pure altruism model lacks predictive power in many situations. Several authors have proposed combining a "joy-of-giving" (sometimes also referred to as "warm glow") motive with altruism to create a model of impure altruism. Unlike altruism, reciprocity represents a pattern of behavior where people respond to friendly or hostile actions with similar actions even if no material gains are expected. Voluntary participation in online feedback mechanisms seems to be largely motivated by self-interest. Because their reputation is on the line, the eBay study showed that some partners using eBay's feedback mechanism had selfish motivations to rate others. For example, data showed that some eBay users exhibited reciprocity towards partners who rated them first. This caused them to rate partners in the hope of increasing the probability of eliciting a reciprocal response. Recognition is important to online contributors: in general, individuals want recognition for their contributions. Some have called this egoboo. Kollock outlines the importance of reputation online: "Rheingold (1993) in his discussion of the WELL (an early online community) lists the desire for prestige as one of the key motivations of individuals' contributions to the group. To the extent this is the concern of an individual, contributions will likely be increased to the degree that the contribution is visible to the community as a whole and to the extent there is some recognition of the person's contributions. ... the powerful effects of seemingly trivial markers of recognition (e.g. being designated as an 'official helper') has been commented on in a number of online communities..." One of the key ingredients of encouraging a reputation is to allow contributors to be known rather than anonymous. The following example, from Meyers' (1989) study of the computer underground, illustrates the power of reputation. When involved in illegal activities, computer hackers must protect their personal identities with pseudonyms.
If hackers use the same nicknames repeatedly, this can help the authorities to trace them. Nevertheless, hackers are reluctant to change their pseudonyms regularly because the status and fame associated with a particular nickname would be lost. On the importance of online identity: Profiles and reputation are clearly evident in online communities today. Amazon.com is a case in point, as all contributors are allowed to create profiles about themselves and as their contributions are measured by the community, their reputation increases. Myspace.com encourages elaborate profiles for members where they can share all kinds of information about themselves including what music they like, their heroes, etc. Displaying photos and information about individual members and their recent activities on social networking websites can promote bonds-based commitment. Because social interaction is the primary basis for building and maintaining social bonds, we can gain appreciation for other users once we interact with them. This appreciation turns into increased recognition for the contributors, which would in turn give them the incentive to contribute more. In addition to this, many communities give incentives for contributing. For example, many forums award Members points for posting. Members can spend these points in a virtual store. eBay is an example of an online marketplace where reputation is very important because it is used to measure the trustworthiness of someone you potentially will do business with. This type of community is known as a reputation system, which is a type of collaborative filtering algorithm which attempts to collect, distribute, and aggregate ratings about all users' past behavior within an online community in an effort to strike a balance between the democratic principles of open publishing and maintaining standards of quality. These systems, like eBay's, promote the idea of trust that relates to expectations of reciprocity which can help increase the sense of reputation for each member. With eBay, you have the opportunity to rate your experience with someone and they, likewise, can rate you. This has an effect on the reputation score. The participants may therefore be encouraged to manage their online identity in order to make a good impression on the other members of the community. Other successful online communities have reputation systems that do not exactly provide any concrete incentive. For example, Reddit is an online social content-aggregation community which serves as a "front page of the Internet" and allows its users to submit content (e.g. text, photos, links, news-articles, blog-posts, music or videos) under sometimes ambiguous usernames. It features a reputation system by which users can rate the quality of submissions and comments. The total votecount of a users' submissions are not of any practical value—however when users feel that their content is generally appreciated by the rest of the Reddit-community (or its sub-communities called "subreddits") they may be motivated to contribute more. Individuals may contribute valuable information because the act results in a sense of efficacy, that is, a sense that they are capable of achieving their desired outcome and have some effect on this environment. There is well-developed research literature that has shown how important a person's sense of efficacy is (e.g. Bandura 1995). 
Studies have shown that increasing the user's sense of efficacy boosts their intrinsic motivation and therefore makes them more likely to stay in an online community. According to Wang and Fesenmaier's research, efficacy is the biggest factor affecting active contribution online. Of the many sub-factors, it was discovered that "satisfying other members' needs" is the biggest reason behind the increase of efficacy in a member, followed by "being helpful to others" (Wang and Fesenmaier). Features such as task progress bars, and attempts to reduce the difficulty of completing a general task, can easily enhance the feeling of self-worth in the community. "Creating immersive experiences with clear goals, feedback and challenge that exercise peoples' skills to the limits but still leave them in control causes the experiences to be intrinsically interesting. Positive but constructive and sincere feedbacks also produce similar effects and increase motivation to complete more tasks. A competitive setting—which may or may not have been intended to be competitive—can also increase a person's self-esteem if quality performance is assumed" (Kraut 2012). People, in general, are social beings and are motivated by receiving direct responses to their contributions. Most online communities enable this by allowing people to reply to others' contributions (e.g. many blogs allow comments from readers, one can reply to forum posts, etc.). Granted, there is some overlap between improving one's reputation and gaining a sense of community, and it seems safe to say that there are also some overlapping areas between all four motivators. While some people are active contributors to online discussion, others join virtual communities and do not actively participate, a concept referred to as lurking (Preece 2009). There are several reasons why people choose not to participate online. For instance, users may get the information they wanted without actively participating, think they are helpful by not posting, want to learn more about the community before becoming an active member, be unable to use the software provided, or dislike the dynamics they observe within the group (Preece, Nonnecke & Andrews 2004). When online communities have lurking members, the amount of participation within the group decreases and the sense of community for these lurking members also diminishes. Online participation increases the sense of community for all members and also gives them a motivation to continue participating. Other problems regarding a sense of community arise when the online community attempts to attract and retain newcomers. These problems include the difficulty of recruiting newcomers, keeping them committed early on, and controlling possible inappropriate behavior. If an online community is able to solve these problems with its newcomers, it can increase the sense of community within the entire group. A sense of community is also heightened in online communities when each person has a willingness to participate due to intrinsic and extrinsic motivations. Findings also show that newcomers may be unaware that an online social networking website even has a community. As these users build their own profiles and get used to the culture of the group over time, they eventually self-identify with the community and develop a sense of belonging to it. Another motivation for participation may also come from self-expression through what is shared in or created for online communities.
Self-discovery may be another motivation, as many online communities allow for feedback on personal beliefs, artistic creations, ideas and the like, which may provide grounds to develop new perspectives on the self. Depending on the online platform, content shared there can be seen by millions around the world, which gives participants a certain influence that can itself serve as a motivation for participation. Additionally, high participation may provide a user with special rights within a community (such as moderator status), which can be built into the technical platform, granted by the community (e.g. via voting), or granted by certain users. Online participation may also be motivated by an instrumental purpose, such as providing specific information. The entertainment of playing or otherwise interacting with other users may be a major motivation for participants in certain communities. Users of social networks have various reasons that motivate them to join particular networks. In general, "communication technologies open up new pathways between individuals who would not otherwise connect". The ability to have synchronous communication arrived with the development of online social networks. Facebook is one example of an online social network that people choose to openly participate in. Although there are a number of different social networking platforms available, there exists a large community of people who choose to actively engage on Facebook. Although Facebook is commonly known as a method of communication, there are a variety of reasons why users prefer Facebook over other platforms as their social networking platform. For some users, interactivity between themselves and other users is a matter of fidelity. For many, it is important to maintain a sense of community. Through participation in online social networks it becomes easier for users to find and communicate with people within their community. Facebook often has friend recommendations based on the geography of the user. This allows users to quickly connect with people in their area whom they may not see often, and stay in contact with them. For students, Facebook is an effective network to participate in when building and maintaining social capital. By adding family, friends, acquaintances, and colleagues who use the network, students can expand their social capital. The online connections they make can prove to be of benefit later on. Due to the competitive nature of the job market, "[i]t is particularly important for university students to build social capital with the industry". Since Facebook has a large number of active users, it is easier for students to find out about job opportunities through their friends online. Facebook's interface allows users to share content such as status updates, photos, and links, and to keep in contact with people they may not be able to see on a day-to-day basis. The messenger application allows friends to have conversations privately, apart from their other friends. Users can also create groups and events through Facebook in order to share information with specific people on the network. "Facebook encourages users to engage in self-promoting". Facebook allows users to engage in self-promotion in a positive way; it allows friends to like and/or comment on posts and statuses. Facebook users are also able to "follow" people whom they may not be friends with, such as public figures, companies, or celebrities.
This allows users to keep up to date with things that interest them, like music, sports, and promotions from their favorite companies, and share them with their Facebook friends. Aside from features such as email, the photo album, and status updates, Facebook provides various additional features which help to individualize each user's experience. While some social networks have a fixed interface that users cannot tailor to their specific interests, Facebook allows users to control certain preferences. Users can use "add-in functions (e.g., virtual pets, online games, the wall, virtual gifts) that facilitate users to customize their own interface on Facebook". Psychology Studies have found that the nature and the level of participation in online social networking sites are directly correlated with the personality of the participants. The Department of Psychology at the University of Windsor cites its findings regarding this correlation in the articles "Personality and motivations associated with Facebook use" and "The Influence of Shyness on the Use of Facebook in an Undergraduate Sample". The articles state that people who have high levels of anxiety, stress, or shyness are more likely to favor socializing through the Internet over in-person socialization. The reason is that they are able to communicate with others without being face-to-face, and mediums such as chat rooms give a sense of anonymity which makes them feel more comfortable when participating in discussions with others. Studies also show that in order to increase online participation, contributors must feel unique and useful, and be given challenging and specific goals. These findings fall in line with the social psychology theories of social loafing and goal setting. Social loafing claims that when people are involved in a group setting, they tend not to contribute as much and depend on the work of others. Goal setting is the theory stating that people will work harder if given a specific goal rather than a broad or general problem. However, other social psychology theories have not been borne out in online participation. For instance, one study found that users will contribute more to an online group project than to an individual one. Additionally, although users enjoy it when their contributions are unique, they want a sense of similarity within the online community. Finding similarities with other members of a community encourages new users to participate more and become more active within the community. So, new users must be able to find and recognize similar users already participating in the community. Also, the online community must provide a way of analyzing and quantifying the contributions made by each user, so that those contributions are visible and users can be convinced that they are unique and useful. However, these and other psychological motivations behind online participation are still being researched today. Sociology Research has shown that social characteristics, such as socioeconomic status, gender, and age, affect users' propensity to participate online. Following sociological research on the digital divide, newer studies indicate a participation divide in the United States (Correa 2010) (Hargittai & Walejko 2008) (Schradie 2011) and the United Kingdom (Blank 2013). Age is the strongest demographic predictor of online participation, while gender differentiates forms of online participation.
The effect of socioeconomic status is not found to be strong in all studies (Correa 2010) and is (partly) mediated through online skills (Hargittai & Walejko 2008) and self-efficacy. Furthermore, existing social science research on online participation has heavily focused on the political sphere, neglecting other areas, such as education, health or cultural participation (Lutz, Hoffmann & Meckel 2014). Participation in the social web Online participation is relevant in different systems of the social web. Nielsen's 90-9-1% rule states: "In most online communities, 90% of users are lurkers who never contribute, 9% of users contribute a little, and 1% of users account for almost all the action". The majority of the user population is thus not contributing to the informational gain of online communities, which leads to the phenomenon of contribution inequality. Often, feedback, opinions and editorials are posted by those users who have stronger feelings about the matter than most others; thus posts online are often not representative of the entire population, leading to what is called survivorship bias. Therefore, it is important to ease the process of contribution as well as to promote quality contribution to address this concern. Lior Zalmanson and Gal Oestreicher-Singer showed that participation in social websites can help boost subscription and conversion rates on those websites.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Oshri_Cohen] | [TOKENS: 597] |
Oshri Cohen Oshri Cohen (Hebrew: אושרי כהן; born 11 January 1984) is an Israeli film, television and stage actor. He has received four Ophir Award nominations for his roles in Bonjour Monsieur Shlomi (2003), Beaufort (2007), Lost Islands (2008) and Working Woman (2018). Early life He was born and raised in Lod in Israel to parents Yardena, a school principal, and Nisim, the owner of a spare parts business. Cohen's family is Jewish of Bulgarian and Sephardic descent. At the age of 16, he moved with his family to Rishon LeZion, where he studied at Mekif Chet High School, majoring in business administration and economics. For his military service with the Israel Defense Forces, he served for six months as a printer with the filming unit of the Israeli Air Force. Career He has appeared in a number of plays, including: "The Jungle Book", "Erfal", "The Concert", "News Flash" (2000), "Letter to Noa" (2001), "The Stories of the Stage", "Wife, Husband, Home" (2003), "The Indian Patient" at the Beit Lessin Theater (2005), "Moarim" (2006) and "All Life Ahead" at the Habima Theater (2007). He had an early film role, playing the titular character in Bonjour Monsieur Shlomi (2003). He received an Ophir Award nomination for his portrayal of Shlomi. In 2005, he starred in Campfire. The story is set in 1981 and tells of a woman seeking to join an Israeli settlement in the West Bank, despite the protests of her teenage daughter. In 2006, he joined the cast of the popular Israeli telenovela HaShir Shelanu, alongside stars such as Ran Danker. In 2007 Cohen starred in the Israeli war film Beaufort, which tells the story of the last unit of soldiers on the legendary Beaufort outpost. He received an Ophir Award nomination for Best Actor for his role. He has also starred in Lost Islands (2008) and Lebanon (2009), which won the Golden Lion at the 66th Venice International Film Festival. Cohen played the guest role of Igal in the fifth season of the American TV series Homeland. In 2018 he starred as Joseph in the BBC/AMC drama McMafia. In the same year he had a supporting role in the Israeli drama film Working Woman. He earned an Ophir Award nomination for Best Supporting Actor for his role as Ofer.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/S%26P_500] | [TOKENS: 1561] |
S&P 500 The S&P 500 (Standard and Poor's 500) is a stock market index tracking the stock performance of 500 leading companies listed on stock exchanges in the United States. It is one of the most commonly followed equity indices and includes approximately 80% of the total market capitalization of U.S. public companies, with an aggregate market cap of more than $61.1 trillion as of December 31, 2025. The S&P 500 is a public-float-adjusted, capitalization-weighted index. The ten largest companies on the list of S&P 500 companies account for approximately 38% of the market capitalization of the index and the 50 largest components account for 60% of the index. As of January 2026, the 10 largest components are, in order of highest to lowest weighting: Nvidia (7.17%), Alphabet (6.39%, including both class A & C shares), Apple (5.86%), Microsoft (5.33%), Amazon (3.98%), Broadcom (2.51%), Meta Platforms (2.49%), Tesla (2.31%), Berkshire Hathaway (1.68%), and Eli Lilly (1.55%). The components that have increased their dividends for 25 consecutive years are known as the S&P 500 Dividend Aristocrats. Companies in the S&P 500 derive a collective 72% of revenues from the United States and 28% from other countries. The index is one of the factors in the computation of the Conference Board Leading Economic Index, which is used to forecast the direction of the economy. The index is associated with many ticker symbols, including ^GSPC, .INX, and SPX, depending on market or website. The S&P 500 is maintained by S&P Dow Jones Indices, a joint venture majority-owned by S&P Global, and its components are selected by a committee. Investing in the S&P 500 Index funds, including mutual funds and exchange-traded funds (ETFs), can replicate, before fees and expenses, the performance of the index by holding the same stocks as the index in the same proportions. ETFs that replicate the performance of the index are issued by The Vanguard Group (NYSE Arca: VOO), iShares (NYSE Arca: IVV), and State Street Corporation (SPDR S&P 500 ETF Trust, NYSE Arca: SPY and NYSE Arca: SPYM). The most liquid, based on average daily volume, is SPY, although SPY has a higher annual expense ratio of 0.09% compared to 0.03% for VOO and IVV, and 0.02% for SPYM. Mutual funds that track the index are offered by Fidelity Investments, T. Rowe Price, and Charles Schwab Corporation. Direxion offers leveraged ETFs which attempt to produce 3x the daily return of either investing in (NYSE Arca: SPXL) or shorting (NYSE Arca: SPXS) the S&P 500. ProShares offers 2x daily return (NYSE Arca: SSO) and 3x daily return (NYSE Arca: UPRO) ETFs. In the derivatives market, the Chicago Mercantile Exchange (CME) offers futures contracts that track the index and trade on the exchange floor in an open outcry auction, or on CME's Globex platform, and are the exchange's most popular product. The Chicago Board Options Exchange (CBOE) offers options on the S&P 500 as well as on S&P 500 ETFs, inverse ETFs, and leveraged ETFs. History In 1860, Henry Varnum Poor formed Poor's Publishing, which published an investor's guide to the railroad industry. In 1923, Standard Statistics Company (founded in 1906 as the Standard Statistics Bureau) began rating mortgage bonds and developed its first stock market index consisting of the stocks of 233 U.S. companies, computed weekly. Three years later, it developed a 90-stock index, computed daily. In 1941, Poor's Publishing merged with Standard Statistics Company to form Standard & Poor's.
On Monday, March 4, 1957, the index was expanded to its current extent of 500 companies and was renamed the S&P 500 Stock Composite Index. In 1962, Ultronic Systems became the compiler of the S&P indices, including the S&P 500 Stock Composite Index, the 425 Stock Industrial Index, the 50 Stock Utility Index, and the 25 Stock Rail Index. On August 31, 1976, The Vanguard Group offered the first mutual fund to retail investors that tracked the index. On April 21, 1982, the Chicago Mercantile Exchange began trading futures based on the index. On July 1, 1983, the Chicago Board Options Exchange began trading options based on the index. Beginning in 1986, the index value was updated every 15 seconds, or 1,559 times per trading day, with price updates disseminated by Reuters. Prior to this, it had been updated once every minute. On January 22, 1993, the Standard & Poor's Depositary Receipts exchange-traded fund issued by State Street Corporation began trading. On September 9, 1997, the S&P E-mini futures contract was introduced on the CME. In 2005, the index transitioned to a public float-adjusted capitalization weighting. Friday, September 17, 2021, was the final trading date for the original S&P ("big") futures contract, which began trading in 1982. Selection criteria Like other indices managed by S&P Dow Jones Indices, but unlike indices such as the Russell 1000 Index which are strictly rule-based, the components of the S&P 500 are selected by a committee. When considering the eligibility of a new addition, the committee assesses the company's merit against several primary criteria. A stock may rise in value when it is added to the index, since index funds must purchase that stock to continue tracking the index. A study published by the National Bureau of Economic Research in October 2021 alleged that companies' purchases of ratings services from S&P Global appear to improve their chance of entering the S&P 500, even if they are not the best fit per the rules. Performance Since its inception in 1926, the index's compound annual growth rate—including dividends—has been approximately 9.8% (6% after inflation), with the standard deviation of the return, calculated on a monthly basis, over the same time period being 20.81%. While the index has declined in several years by over 30%, it has posted annual increases 70% of the time, with 5% of all trading days resulting in record highs. Returns are generally quoted as price returns (excluding returns from dividends). However, they can also be quoted as total return, which includes returns from dividends and the reinvestment thereof, and "net total return", which reflects the effects of dividend reinvestment after the deduction of withholding tax. The S&P 500's record closing high of 6,932.05 was set on December 24, 2025. The index had experienced an intra-year correction, typically defined as a decline of 10 to 20%, falling to a low of 4,982.77 on April 8, 2025, before staging a sharp recovery. The S&P 500 rose above 7,000 points during trading for the first time in history on January 28, 2026.
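The float-adjusted capitalization weighting and the compound annual growth rate described above are straightforward to express in code. The Python sketch below is illustrative only: the constituent names ("AAA", "BBB", "CCC") and their share counts, float factors, and prices are made-up numbers, not index data, and the CAGR example simply reproduces the ~9.8% figure quoted above from a synthetic series.

```python
def index_weights(constituents):
    """Float-adjusted capitalization weights:
    weight_i = (shares_i * float_factor_i * price_i) / total float-adjusted cap."""
    float_caps = {name: shares * float_factor * price
                  for name, (shares, float_factor, price) in constituents.items()}
    total = sum(float_caps.values())
    return {name: cap / total for name, cap in float_caps.items()}

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two index (or portfolio) values."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical constituents: (shares outstanding, public-float factor, price).
constituents = {
    "AAA": (1_000, 0.95, 150.0),
    "BBB": (2_000, 0.80, 60.0),
    "CCC": (500, 1.00, 400.0),
}
print(index_weights(constituents))  # ~{'AAA': 0.325, 'BBB': 0.219, 'CCC': 0.456}

# A synthetic total-return series that compounds at 9.8% per year for 30 years:
print(cagr(100.0, 100.0 * 1.098 ** 30, 30))  # ~0.098
```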
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_note-29] | [TOKENS: 5247] |
Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis constitute an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provides a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis experienced work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis. 
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Egonetwork analysis focuses on network characteristics, such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses, are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. 
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in the social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical system and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.
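To make the graph-theoretic measures mentioned above concrete (the degree distribution and the clustering coefficient in particular), the following minimal Python sketch grows a small network by Barabási-Albert-style preferential attachment and reports both. It is illustrative only and is not drawn from the article; the network size (100 nodes) and attachment parameter (2 edges per new node) are arbitrary choices, and only the standard library is used.

```python
import random
from collections import Counter

def preferential_attachment_graph(n: int, m: int, seed: int = 42) -> dict[int, set[int]]:
    """Grow a graph in which each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree
    (Barabási-Albert-style preferential attachment)."""
    rng = random.Random(seed)
    adj: dict[int, set[int]] = {i: set() for i in range(m + 1)}
    # Start from a small complete core so early nodes have nonzero degree.
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            adj[i].add(j)
            adj[j].add(i)
    # 'stubs' repeats each node once per incident edge, so uniform sampling
    # from it is degree-proportional sampling.
    stubs = [v for v, nbrs in adj.items() for _ in nbrs]
    for new in range(m + 1, n):
        targets: set[int] = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            stubs.extend([new, t])
    return adj

def average_clustering(adj: dict[int, set[int]]) -> float:
    """Mean local clustering coefficient: the fraction of a node's
    neighbour pairs that are themselves connected."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

g = preferential_attachment_graph(100, 2)
degree_counts = Counter(len(nbrs) for nbrs in g.values())
print("degree distribution:", dict(sorted(degree_counts.items())))
print("average clustering coefficient:", round(average_clustering(g), 3))
```

In a typical run, a handful of early nodes end up as high-degree "hubs" while most nodes retain the minimum degree, which is the heavy-tailed pattern that the scale-free description above refers to.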
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibition. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. 
Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as Dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the laboratory. Still other experiments have documented the induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
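The diffusion studies mentioned above often rest on simple spreading models. Below is a minimal, illustrative independent-cascade simulation in Python with networkx; the contact network, the seed adopters, and the 5% transmission probability are all invented for the example and are not taken from any particular study.

```python
# Illustrative sketch: an independent-cascade model of how an innovation might
# spread from a few "early adopters" through a contact network.
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.05, seed=1):
    """Each newly adopting node gets one chance to convert each neighbor with probability p."""
    rng = random.Random(seed)
    adopted = set(seeds)
    frontier = list(seeds)
    while frontier:
        new = []
        for node in frontier:
            for neighbor in G.neighbors(node):
                if neighbor not in adopted and rng.random() < p:
                    adopted.add(neighbor)
                    new.append(neighbor)
        frontier = new
    return adopted

G = nx.watts_strogatz_graph(n=1_000, k=6, p=0.1, seed=7)  # a small-world contact network
early_adopters = [0, 1, 2]
final = independent_cascade(G, early_adopters, p=0.05)
print(f"innovation reached {len(final)} of {G.number_of_nodes()} actors")
```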
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network (e.g., writers, critics, publishers, and literary histories) can be mapped using visualizations from SNA. Network research on organizations also studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity. This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill, writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. 
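A relation of this kind can be represented directly as an attributed, directed edge. The sketch below, with invented names and values, shows one way to store the content, direction, and strength of ties with networkx.

```python
# Illustrative sketch (invented data): ties from a social networking service as a
# directed graph whose edges carry the content and strength of each relation;
# the edge direction records who sends to whom.
import networkx as nx

G = nx.DiGraph()
G.add_edge("alice", "bob",   content="emotional support",  strength=0.9)
G.add_edge("bob",   "alice", content="data file",          strength=0.4)
G.add_edge("alice", "carol", content="meeting invitation", strength=0.6)

for sender, receiver, attrs in G.edges(data=True):
    print(f"{sender} -> {receiver}: {attrs['content']} (strength {attrs['strength']})")
```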
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social network analysis can be used as a tool to measure the degree of segregation or homophily within a network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within the current social network of individuals in a certain area.
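One common way to quantify the homophily or segregation described above is an attribute assortativity coefficient. The following networkx sketch, run on an invented two-neighbourhood toy network, is illustrative only.

```python
# Illustrative sketch (toy data): quantifying homophily with networkx's attribute
# assortativity coefficient -- close to +1 when ties stay within a group,
# near 0 when ties ignore the attribute, negative when ties cross groups.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["a1", "a2", "a3"], neighbourhood="north")
G.add_nodes_from(["b1", "b2", "b3"], neighbourhood="south")
G.add_edges_from([("a1", "a2"), ("a2", "a3"),   # within-group ties
                  ("b1", "b2"), ("b2", "b3"),
                  ("a3", "b1")])                # a single cross-group tie

r = nx.attribute_assortativity_coefficient(G, "neighbourhood")
print(f"assortativity by neighbourhood: {r:.2f}")  # positive => homophilous/segregated
```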
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-338] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Originally created by Markus "Notch" Persson using the Java programming language, Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPC) through trading emeralds for different goods and vice versa. 
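As an illustration of the voxel-grid idea described above (and not Mojang's actual implementation), a world can be thought of as a sparse map from integer coordinates to block types, with placing and mining as insertions and removals. The block names and coordinates below are arbitrary.

```python
# Illustrative sketch only: a sparse voxel grid with blocks addressed by integer
# (x, y, z) coordinates that can be placed and mined.
from typing import Dict, Tuple

Coord = Tuple[int, int, int]

class VoxelWorld:
    def __init__(self) -> None:
        self.blocks: Dict[Coord, str] = {}   # sparse map: coordinate -> block type

    def place(self, pos: Coord, block: str) -> None:
        self.blocks[pos] = block

    def mine(self, pos: Coord) -> str:
        """Remove a block and return it, so it could be carried in an inventory."""
        return self.blocks.pop(pos, "air")

    def block_at(self, pos: Coord) -> str:
        return self.blocks.get(pos, "air")   # empty space defaults to air

world = VoxelWorld()
world.place((0, 64, 0), "dirt")
world.place((0, 65, 0), "torch")
print(world.block_at((0, 64, 0)))   # dirt
print(world.mine((0, 64, 0)))       # dirt -- the block is picked up
print(world.block_at((0, 64, 0)))   # air
```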
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
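The seed-based procedural generation described above can be illustrated with a toy example; this is not Minecraft's actual terrain algorithm, only a sketch of the key idea that a single seed deterministically reproduces the same terrain, generated chunk by chunk as players explore.

```python
# Illustrative sketch only: deterministic, seed-driven terrain heights.
# Hashing the seed together with each column's coordinates gives every
# column a reproducible height, so the same seed recreates the same world.
import hashlib

def column_height(seed: int, x: int, z: int, base: int = 64, variation: int = 20) -> int:
    """Deterministic pseudo-random surface height for the column at (x, z)."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return base + digest[0] % variation   # same seed + coordinates -> same height

def generate_chunk(seed: int, chunk_x: int, chunk_z: int, size: int = 16):
    """A 16x16 'chunk' of heights; chunks can be generated lazily on demand."""
    return [[column_height(seed, chunk_x * size + dx, chunk_z * size + dz)
             for dz in range(size)] for dx in range(size)]

a = generate_chunk(seed=42, chunk_x=0, chunk_z=0)
b = generate_chunk(seed=42, chunk_x=0, chunk_z=0)
print(a == b)                                              # True: same seed, same terrain
print(generate_chunk(seed=7, chunk_x=0, chunk_z=0) == a)   # False: a different world
```

Because generation depends only on the seed and the coordinates, nothing needs to be stored for unexplored terrain, which is one reason seed-based worlds can be effectively infinite and shareable by exchanging a single number.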
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem, which takes about nine minutes to scroll past, is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing the player to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are they affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, hosting one themselves or connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, and support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was arbitrated on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received usually annual major updates—free to players who have purchased the game— each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need of RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development began for the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—in May 2009,[k] and ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this apparent acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. This version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft like world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on a community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2013, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed over a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award (PC and Console). Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first vote this was changed so that losing mobs could still come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian ranked Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos, often made by commentators, began to gain influence on YouTube. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project, in Kibera, one of Nairobi's informal settlements, was in the planning phase at the time. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without requiring training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), whereas the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in countries where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for their similarities to Minecraft, and some were described as "clones", whether because of direct inspiration or merely superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears that Nintendo platforms would be left without the game proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA claim was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-117] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" that native companies do not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented the proposal to them. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop the work it had done with Nintendo and Sega into a console of its own, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to keep the project alive and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE offices in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour their own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should further hardware revisions be made. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM to be a challenge given the 3.5 megabyte restriction.
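The 3.5 megabyte figure is easier to appreciate as a concrete tally. The short C program below is a purely hypothetical budgeting sketch, not code from Sony's libraries or from any developer mentioned here: the 2 MB of main RAM and 1 MB of video RAM are the figures given later in this article, while the 512 KB sound RAM pool and every asset size are assumptions invented for illustration.

#include <stdio.h>

/* Hypothetical memory-budget sketch for a PlayStation-era game.
 * The 2 MB main RAM and 1 MB video RAM pools are cited elsewhere in
 * this article; the 512 KB sound RAM pool and all asset sizes below
 * are invented for illustration only. */
#define KB(n) ((n) * 1024L)

int main(void)
{
    long main_ram  = KB(2048);  /* code, game state, decompressed assets */
    long video_ram = KB(1024);  /* framebuffers and textures */
    long sound_ram = KB(512);   /* ADPCM samples (assumed pool size) */

    /* Invented per-level asset sizes. */
    long code_and_heap = KB(700), level_geometry = KB(900);
    long framebuffers  = KB(300), textures       = KB(600);
    long music_samples = KB(350), sound_effects  = KB(100);

    printf("main RAM remaining:  %ld KB\n",
           (main_ram - code_and_heap - level_geometry) / 1024);
    printf("video RAM remaining: %ld KB\n",
           (video_ram - framebuffers - textures) / 1024);
    printf("sound RAM remaining: %ld KB\n",
           (sound_ram - music_samples - sound_effects) / 1024);
    return 0;
}

Totalled, the three separate pools come to roughly the 3.5 megabytes the Ocean engineer describes; every asset had to be squeezed into one of them, which is the allocation problem referred to above.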
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold for every console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony launched the console countrywide (in its PS One form) on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because a third company had registered the trademark, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans such as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.), in which the missing letters were replaced by the controller's geometric button symbols, and "U R NOT E" (with the "E" printed in red). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, prompted by their declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw up to 4,000 sprites and 180,000 textured polygons per second, or 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications using C compilers.
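To make the division of labour between the CPU, the GTE and the GPU more concrete, the sketch below shows, in plain C, the kind of rotate-translate-and-project step that a PlayStation game would normally hand to the GTE before sending the resulting 2D polygon to the GPU. It is a hypothetical illustration rather than code from Sony's development libraries: the Vec3 and Mat3 types, the transform and project helpers, the 4.12 fixed-point convention for the matrix entries, and the focal length of 256 are all assumptions made for this example, reflecting only the integer-arithmetic nature of the hardware described above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical plain-C approximation of the fixed-point vertex transform
 * the GTE coprocessor performs in hardware; not actual SDK code. */
#define FP_SHIFT 12                       /* 4.12 fixed point: 1.0 == 4096 */

typedef struct { int32_t x, y, z; } Vec3;
typedef struct { int32_t m[3][3]; } Mat3; /* rotation matrix, 4.12 entries */

/* Rotate and translate a vertex: v' = R*v + t, using only integer maths. */
static Vec3 transform(const Mat3 *r, Vec3 t, Vec3 v)
{
    Vec3 o;
    o.x = ((r->m[0][0]*v.x + r->m[0][1]*v.y + r->m[0][2]*v.z) >> FP_SHIFT) + t.x;
    o.y = ((r->m[1][0]*v.x + r->m[1][1]*v.y + r->m[1][2]*v.z) >> FP_SHIFT) + t.y;
    o.z = ((r->m[2][0]*v.x + r->m[2][1]*v.y + r->m[2][2]*v.z) >> FP_SHIFT) + t.z;
    return o;
}

/* Perspective projection to screen space; the GPU only ever sees the
 * resulting 2D coordinates, which it rasterises as shaded polygons. */
static void project(Vec3 v, int32_t focal, int32_t *sx, int32_t *sy)
{
    if (v.z == 0) v.z = 1;                /* avoid division by zero */
    *sx = (v.x * focal) / v.z;
    *sy = (v.y * focal) / v.z;
}

int main(void)
{
    Mat3 identity = {{{4096, 0, 0}, {0, 4096, 0}, {0, 0, 4096}}};
    Vec3 translate = {0, 0, 1024};        /* push the vertex into the scene */
    Vec3 vertex = {100, 50, 0};
    int32_t sx, sy;

    Vec3 cam = transform(&identity, translate, vertex);
    project(cam, 256, &sx, &sy);          /* 256 is an assumed focal length */
    printf("screen position: (%d, %d)\n", (int)sx, (int)sy);
    return 0;
}

Because every quantity here is an integer, the sketch also illustrates why 2D interface elements can simply be treated as more polygons: once projected, the GPU draws them the same way as anything else.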
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons activated by clicking the sticks in), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem!
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector; the same system was also used to encode discs' regional lockouts, and a conceptual sketch of the resulting boot-time check appears at the end of this passage. The signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency (so duplicated discs omitted it), since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, experience skipping during full-motion video playback or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
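Returning briefly to the disc-authentication scheme described above, the fragment below is a purely conceptual sketch of the boot-time decision, not firmware code. The idea that the wobble-encoded pregap data decodes to a short licence string such as "SCEE" which the console compares against its own region is an assumption added for illustration; the article only states that the wobble gates booting and encodes the regional lockout.

#include <stddef.h>
#include <string.h>

/* Conceptual sketch of the PlayStation's boot-time disc check; not firmware
 * code. The decoded_wobble parameter stands in for whatever the drive
 * hardware decodes from the deliberate wobble pressed into a genuine disc;
 * a burned copy yields nothing, because a CD writer cannot reproduce it. */
static const char *CONSOLE_REGION = "SCEE";   /* assumed: a PAL console */

int disc_may_boot(const char *decoded_wobble /* NULL if none detected */)
{
    if (decoded_wobble == NULL)
        return 0;                              /* burned or damaged copy */
    /* The region string must match the console, enforcing the lockout. */
    return strcmp(decoded_wobble, CONSOLE_REGION) == 0;
}

int main(void)
{
    /* A genuine PAL disc boots; a copy with no decodable wobble does not. */
    return (disc_may_boot("SCEE") && !disc_may_boot(NULL)) ? 0 : 1;
}

The point of the sketch is only that the decision hinges on information a conventional CD writer cannot reproduce, which is why duplicated discs fail to boot while pressed originals pass.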
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling those of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for each of the five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-highest number of games ever produced for a console. Its success resulted in a significant financial boon for Sony, as profits from their video game division grew to account for 23% of the company's total profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in gaining mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the proprietary cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games to the user at about 40% lower cost compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run off the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival consoles, the Nintendo Entertainment System Classic Edition and the Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-77] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits, as the adder sketch below illustrates. Input devices are the means by which a computer is controlled and provided with data; examples include keyboards, mice and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form, such as monitors and printers.
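As a simple illustration of how logic gates can be combined to do arithmetic, the following C sketch builds a one-bit "full adder" out of the AND, OR and XOR operations; the function and variable names are chosen purely for illustration.

    /* A one-bit "full adder" built from the logic operations named above.
       The function and variable names here are purely illustrative. */
    #include <stdio.h>

    /* sum = a XOR b XOR carry_in; carry_out is 1 when at least two inputs are 1 */
    static void full_adder(int a, int b, int carry_in, int *sum, int *carry_out)
    {
        *sum       = a ^ b ^ carry_in;                          /* XOR gates       */
        *carry_out = (a & b) | (a & carry_in) | (b & carry_in); /* AND and OR gates */
    }

    int main(void)
    {
        int sum, carry;
        full_adder(1, 1, 0, &sum, &carry);        /* 1 + 1 = binary 10 */
        printf("sum=%d carry=%d\n", sum, carry);  /* prints sum=0 carry=1 */
        return 0;
    }

Chaining eight such adders, carry output to carry input, is essentially how an ALU adds two one-byte numbers.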
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the instruction from the memory cell indicated by the program counter, decodes the instruction into control signals, fetches any data the instruction needs from memory, has the ALU or other hardware carry out the operation, writes the result back to memory or a register, and then updates the program counter so that the cycle repeats for the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
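The fetch-decode-execute cycle, the program counter and the idea of a jump described above can be pictured with a toy simulator written in C. The instruction set, the opcode numbers and the small program below are invented for this sketch and do not correspond to any real CPU; memory is simply an array of numbered cells, as in the description above.

    /* A toy model of the fetch-decode-execute cycle described above. */
    #include <stdio.h>

    enum { LOAD = 1, ADD = 2, JUMP_IF_LESS = 3, HALT = 4 };

    int main(void)
    {
        /* The first cells of memory hold the program:
           put 0 into cell 100, then keep adding 1 to it until it reaches 5. */
        int memory[128] = {
            LOAD, 100, 0,            /* address 0:  put 0 into cell 100         */
            ADD, 100, 1,             /* address 3:  add 1 to cell 100           */
            JUMP_IF_LESS, 100, 5, 3, /* address 6:  if cell 100 < 5, jump to 3  */
            HALT                     /* address 10: stop                        */
        };
        int pc = 0;                  /* the program counter */

        for (;;) {
            int opcode = memory[pc];                       /* fetch            */
            if (opcode == LOAD) {                          /* decode + execute */
                memory[memory[pc + 1]] = memory[pc + 2];
                pc += 3;
            } else if (opcode == ADD) {
                memory[memory[pc + 1]] += memory[pc + 2];
                pc += 3;
            } else if (opcode == JUMP_IF_LESS) {
                /* a conditional jump changes the program counter itself */
                pc = (memory[memory[pc + 1]] < memory[pc + 2]) ? memory[pc + 3]
                                                               : pc + 4;
            } else {                                       /* HALT             */
                break;
            }
        }
        printf("cell 100 holds %d\n", memory[100]);        /* prints 5 */
        return 0;
    }

Running the sketch prints "cell 100 holds 5", because the conditional jump keeps sending the program counter back to the ADD instruction until the value in cell 100 reaches 5.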
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
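The byte ranges and two's complement notation described above can be seen directly in a short C sketch; the values and variable names are chosen for illustration.

    /* The byte ranges and two's complement storage described above, made
       concrete; the values and variable names are chosen for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t as_unsigned = 200;           /* one byte read as 0..255           */
        int8_t  as_signed   = (int8_t)200;   /* the same bits read as -128..+127;
                                                200 is 11001000, which in two's
                                                complement means 200 - 256 = -56  */

        printf("unsigned byte: %u\n", (unsigned)as_unsigned);  /* prints 200 */
        printf("signed byte:   %d\n", (int)as_signed);         /* prints -56 */

        /* Larger numbers simply use several consecutive bytes, e.g. four: */
        int32_t larger = 123456789;
        printf("a four-byte value: %d (%zu bytes)\n", (int)larger, sizeof larger);
        return 0;
    }

The same eight-bit pattern is thus read as 200 when treated as an unsigned byte and as −56 when treated as a signed, two's complement byte.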
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
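The "slice of time" idea behind multitasking can be sketched in a heavily simplified form in C. Real operating systems rely on hardware interrupts to preempt running programs; in this sketch each "program" is reduced to a counter of work remaining, the loop plays the role of the interrupt-driven switch, and the task names and numbers are illustrative only.

    /* A heavily simplified sketch of round-robin time-slicing. */
    #include <stdio.h>

    #define NUM_TASKS 3

    struct task { const char *name; int work_left; };

    int main(void)
    {
        struct task tasks[NUM_TASKS] = {
            { "editor",  3 },
            { "printer", 2 },
            { "browser", 4 },
        };
        int unfinished = NUM_TASKS;

        /* Round-robin: each task gets one "slice" in turn until all are done. */
        while (unfinished > 0) {
            for (int i = 0; i < NUM_TASKS; i++) {
                if (tasks[i].work_left > 0) {
                    printf("time slice -> %s\n", tasks[i].name); /* the task runs briefly */
                    if (--tasks[i].work_left == 0)
                        unfinished--;
                }
            }
        }
        return 0;
    }

Each pass of the outer loop hands one slice to every task that still has work to do, which is why, seen from the outside, all three appear to make progress "at the same time".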
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
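Such a program can be written in the MIPS assembly language; the listing below is a minimal sketch of one way to sum the integers from 1 to 1,000, with register numbers and label names chosen for illustration rather than taken from any particular source.

            addi $8, $0, 0         # set the running sum (register 8) to 0
            addi $9, $0, 1         # set the current number (register 9) to 1
    loop:   slti $10, $9, 1001     # is the current number still below 1001?
            beq  $10, $0, done     # if not, leave the loop
            add  $8, $8, $9        # add the current number to the sum
            addi $9, $9, 1         # move on to the next number
            j    loop              # repeat
    done:   add  $2, $8, $0        # copy the finished sum into register 2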
In the MIPS assembly language, this task can be expressed as a short loop of a few instructions. Once told to run such a program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages; some are intended for general purpose programming, while others are useful only for highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data (see the short sketch below). The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
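The parameter adjustment mentioned above in connection with machine learning can be sketched as follows. The data, model and learning rate are invented for illustration, and real systems adjust millions or billions of parameters rather than one:

```python
# Minimal parameter adjustment by gradient descent (illustrative assumption,
# not taken from the article): learn w so that y ~= w * x for the given data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # inputs x and targets y (here y = 2x)
w = 0.0                                        # the model's single parameter
learning_rate = 0.05

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                  # adjust the parameter

print(round(w, 3))   # approaches 2.0 as training proceeds
```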
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-88] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
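To give a rough sense of the backbone speeds just quoted, the following sketch computes transfer times for an example file at each rate; the 10-megabyte file size is an assumption chosen for illustration, and protocol overhead is ignored:

```python
# Rough transfer times for a 10-megabyte file at the NSFNET backbone speeds
# quoted above (illustrative arithmetic only; overhead ignored).

file_bits = 10 * 8 * 10**6          # 10 MB expressed in bits

for label, rate_bits_per_s in [("56 kbit/s", 56_000),
                               ("1.5 Mbit/s", 1_500_000),
                               ("45 Mbit/s", 45_000_000)]:
    seconds = file_bits / rate_bits_per_s
    print(f"{label:>10}: {seconds:8.1f} seconds")
    # ~1428.6 s, ~53.3 s and ~1.8 s respectively
```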
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain (see the short illustration below). Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
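The mojibake mentioned above is easy to reproduce: bytes written in one character encoding and read back in another display incorrectly. A minimal Python illustration follows; the sample word is an arbitrary choice:

```python
# Mojibake: UTF-8 bytes interpreted as a different encoding display incorrectly.

word = "résumé"                      # arbitrary example containing non-ASCII letters
encoded = word.encode("utf-8")       # the bytes actually stored or transmitted

print(encoded.decode("latin-1"))     # "rÃ©sumÃ©"  <- garbled (mojibake)
print(encoded.decode("utf-8"))       # "résumé"    <- correct with the right encoding
```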
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of collaborative development enabled by such software is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems for information transfer, sharing and exchanging business data and logistics and is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enable users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
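Since HTTP is described above as the main access protocol of the World Wide Web, a minimal client request is sketched below using Python's standard urllib module. The URL is a placeholder (example.com is a domain reserved for documentation), and real browsers add many more headers and layers on top of this basic exchange:

```python
# Minimal HTTP GET request, as a web browser or web service client would issue.
from urllib.request import urlopen

# example.com is a reserved illustration domain; any reachable URL would do.
with urlopen("http://example.com/") as response:
    print(response.status)              # e.g. 200 (OK)
    print(response.read(80).decode())   # first bytes of the returned HTML document
```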
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components.) This is a suite of protocols that are ordered into a set of four conceptional layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123:[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via Dynamic Host Configuration Protocol, or are configured manually.[citation needed] The Domain Name System converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
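The prefix and netmask arithmetic described above, and the routing-table behaviour with a default route discussed next, can be checked with Python's standard ipaddress module. The routing-table entries below are invented examples:

```python
import ipaddress

# The example prefix from the text: 198.51.100.0/24.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)                                   # 255.255.255.0
print(net[0], "-", net[-1])                          # 198.51.100.0 - 198.51.100.255
print(ipaddress.ip_address("198.51.100.7") in net)   # True

# The IPv6 block 2001:db8::/32 contains 2**96 addresses.
print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)   # True

# Simplified routing-table lookup: the most specific matching prefix wins,
# falling back to the default route 0.0.0.0/0 (entries are invented examples).
routes = {
    ipaddress.ip_network("198.51.100.0/24"): "eth0",
    ipaddress.ip_network("0.0.0.0/0"): "default gateway",
}

def next_hop(address):
    addr = ipaddress.ip_address(address)
    matches = [n for n in routes if addr in n]
    return routes[max(matches, key=lambda n: n.prefixlen)]   # longest prefix wins

print(next_hop("198.51.100.7"))   # eth0
print(next_hop("203.0.113.9"))    # default gateway
```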
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks in order to limit access by children to pornographic material or depiction of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: Global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
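To make the disputed energy-intensity figures above concrete, here is a minimal, purely illustrative Python sketch (not taken from the cited studies; the monthly traffic volume is a hypothetical round number) that converts a traffic volume into annual electricity use under the two extreme published estimates of 0.0064 kWh/GB and 136 kWh/GB:

# Illustrative only: shows how far apart the published energy-intensity
# estimates for Internet data transfer are (0.0064 vs 136 kWh per GB).
LOW_KWH_PER_GB = 0.0064   # lowest estimate cited in the 2014 review
HIGH_KWH_PER_GB = 136.0   # highest estimate cited in the 2014 review

monthly_traffic_pb = 100_000                         # hypothetical, petabytes/month
monthly_traffic_gb = monthly_traffic_pb * 1_000_000  # 1 PB = 10^6 GB

for label, intensity in [("low", LOW_KWH_PER_GB), ("high", HIGH_KWH_PER_GB)]:
    annual_twh = monthly_traffic_gb * intensity * 12 / 1e9  # kWh -> TWh
    print(f"{label} estimate: {annual_twh:,.1f} TWh per year")

print(f"spread: factor of {HIGH_KWH_PER_GB / LOW_KWH_PER_GB:,.0f}")

The last line reproduces the roughly 20,000-fold spread between estimates mentioned above; the absolute figures depend entirely on the assumed traffic volume.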
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Binaries] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
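As a numerical aside to Michell's 1784 argument described above, a minimal Python sketch (using standard values for G, the solar mass and radius, and the speed of light; the Newtonian escape-velocity formula is the one Michell's reasoning amounts to) showing that a body with the Sun's mean density but 500 times its radius would have a surface escape velocity exceeding the speed of light:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m

def escape_velocity(mass, radius):
    """Newtonian surface escape velocity, v = sqrt(2GM/R)."""
    return math.sqrt(2 * G * mass / radius)

# A body with the Sun's mean density but 500x its radius has 500^3 times its mass.
R = 500 * R_SUN
M = 500**3 * M_SUN

v = escape_velocity(M, R)
print(f"escape velocity: {v:.3e} m/s ({v / c:.2f} c)")  # comes out slightly above c
# At constant density the escape velocity scales linearly with radius, so the
# Sun's ~618 km/s surface escape velocity reaches c at roughly 500 solar radii.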
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axially symmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this theory has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often fast: the stellar-mass black hole GRS 1915+105, for example, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
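As a brief aside, a minimal Python sketch of the extremality bound quoted above, evaluated with SI constants; the mass, charge, and spin values fed in are arbitrary illustrative inputs, not measurements of any real object:

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
EPSILON_0 = 8.854e-12  # F/m
M_SUN = 1.989e30       # kg

def is_sub_extremal(M, Q=0.0, J=0.0):
    """Check Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2 (all quantities in SI units)."""
    lhs = Q**2 / (4 * math.pi * EPSILON_0) + (c**2 * J**2) / (G * M**2)
    return lhs <= G * M**2

M = 10 * M_SUN               # hypothetical 10-solar-mass black hole
J_max = G * M**2 / c         # maximal (extremal) angular momentum for Q = 0
print(is_sub_extremal(M, J=0.9 * J_max))   # True: below the bound, an event horizon exists
print(is_sub_extremal(M, J=1.1 * J_max))   # False: such a solution would be a naked singularity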
Depending on the spin of the black hole, this inward plunge of gas happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^{2}}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^{2}} \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$, where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being forced into the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become a white dwarf. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110-350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹ to 10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50-100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly-magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
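The approximate mass ranges quoted above for this classification can be captured in a small helper; a minimal Python sketch (the boundaries below simply follow the round figures given in this section and are not sharp physical cut-offs):

def classify_black_hole(mass_solar):
    """Rough classification by mass (in solar masses), following the approximate
    ranges quoted above; real boundaries are not sharp."""
    if mass_solar < 2:
        return "micro or primordial black hole (below the stellar-collapse minimum)"
    if mass_solar < 1e2:
        return "stellar black hole"
    if mass_solar < 1e6:
        return "intermediate-mass black hole"  # the 1e5-1e6 range is a grey area
    if mass_solar < 1e9:
        return "supermassive black hole"
    return "ultramassive black hole (proposed subcategory)"

for m in (10, 200, 4.3e6, 2e10):
    print(f"{m:g} solar masses -> {classify_black_hole(m)}")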
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvin, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbations to this orbit will lead to the particle spiraling into the black hole, and any outward perturbations will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\text{ISCO}} = 3\,r_{\text{s}} = \frac{6\,GM}{c^{2}}$, where $r_{\text{ISCO}}$ is the radius of the ISCO, $r_{\text{s}}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
For example, the ISCO for a particle orbiting retrograde can be as far out as about $9\,r_{\text{s}}$, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will lie between 1 and 3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will lie between 3 and 5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through $r_{\text{s}} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}~\text{km}$, where $r_{\text{s}}$ is the Schwarzschild radius and $M_{\odot}$ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to $r_{+} = \frac{GM}{c^{2}}$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside of the black hole. The inner horizon is divided up into two segments: an ingoing section and an outgoing section.
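Putting the characteristic radii from the last few paragraphs together, a minimal Python sketch (SI constants; the non-rotating, uncharged case only) that evaluates the Schwarzschild radius, photon-sphere radius, ISCO radius, and the mean density inside the horizon:

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2                 # r_s = 2GM/c^2

def photon_sphere_radius(mass_kg):
    return 1.5 * schwarzschild_radius(mass_kg)    # 1.5 r_s for a Schwarzschild hole

def isco_radius(mass_kg):
    return 3 * schwarzschild_radius(mass_kg)      # 3 r_s = 6GM/c^2, non-spinning case

def mean_density(mass_kg):
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4 / 3 * math.pi * r**3)     # scales as 1/M^2

for m_solar in (1, 10, 1e8):
    m = m_solar * M_SUN
    print(f"{m_solar:g} M_sun: r_s = {schwarzschild_radius(m) / 1e3:.3g} km, "
          f"photon sphere = {photon_sphere_radius(m) / 1e3:.3g} km, "
          f"ISCO = {isco_radius(m) / 1e3:.3g} km, "
          f"mean density = {mean_density(m):.3g} kg/m^3")
# For ~1e8 solar masses the mean density comes out on the order of 1e3 kg/m^3,
# i.e. comparable to water, as noted above.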
At the ingoing section of this inner (Cauchy) horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off of the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole contains a singularity, a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole is large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift $z \sim 7$, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process to build supermassive black holes has a limiting rate of mass accumulation and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of nearly pure hydrogen gas (low metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way, which could then grow to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is typically unstable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly only found in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed areas that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually, the curvature of spacetime in these regions could become large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays and Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little for the remaining black holes to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter on black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation of the masses of supermassive black holes in the centres of galaxies with the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes, with modern research predicting that primordial black holes must make up less than 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole cannot be reduced to zero. These laws are mathematical analogs of the laws of thermodynamics.
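As an aside on the Hawking temperature mentioned a few sentences above, a minimal Python sketch of the standard formula T = ħc³/(8πGMk_B) with SI constants; it reproduces the ~62 nK figure quoted for a one-solar-mass hole and the roughly lunar-mass threshold below which a black hole out-radiates the 2.7 K cosmic microwave background:

import math

HBAR = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
K_B = 1.381e-23    # J/K
M_SUN = 1.989e30   # kg
M_MOON = 7.35e22   # kg

def hawking_temperature(mass_kg):
    """Black-body temperature of Hawking radiation, inversely proportional to mass."""
    return HBAR * c**3 / (8 * math.pi * G * mass_kg * K_B)

print(f"1 M_sun: T = {hawking_temperature(M_SUN) * 1e9:.0f} nK")   # about 62 nK

# Mass below which the hole is hotter than the 2.7 K microwave background
# (and therefore evaporates rather than grows):
M_threshold = HBAR * c**3 / (8 * math.pi * G * K_B * 2.7)
print(f"threshold mass: {M_threshold:.2e} kg "
      f"({M_threshold / M_MOON:.2f} lunar masses)")   # a bit less than the Moon's mass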
The two sets of laws are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy which scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29 Observational evidence Millions of black holes with around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is based on its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture comparable to the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off of mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, they do not cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and carefully control for noise from Earth to be able to detect these gravitational waves. Since the first detection in 2015, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
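To illustrate why the Earth-sized effective aperture mentioned above is needed, a back-of-the-envelope Python sketch of the diffraction limit θ ≈ 1.22 λ/D; the 1.3 mm observing wavelength corresponds to the millimetre band the EHT uses, while the ~50 microarcsecond shadow size for Sagittarius A* is an approximate published figure used here only as an input:

import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

wavelength = 1.3e-3                     # m, millimetre radio band
shadow_angle = 50e-6 * ARCSEC_TO_RAD    # ~50 microarcseconds (approximate)

# Diffraction limit: theta ~ 1.22 * lambda / D  ->  D ~ 1.22 * lambda / theta
required_aperture = 1.22 * wavelength / shadow_angle
print(f"required aperture: {required_aperture / 1e3:.0f} km")   # thousands of kilometres

earth_diameter_km = 12_742
print(f"Earth diameter:    {earth_diameter_km} km")
# A single dish of this size is impossible, hence combining telescopes across
# the globe (very-long-baseline interferometry) to synthesise the aperture.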
In 1998, by fitting the motions of the tracked stars to Keplerian orbits, astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass of Sagittarius A* to 4.3×10⁶ M☉, with a radius of less than 0.002 light-years. This upper limit radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
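One standard tool behind the X-ray binary mass estimates described above is the binary mass function, f(M) = K³P/(2πG), which depends only on the companion star's orbital period P and radial-velocity semi-amplitude K and gives a strict lower bound on the unseen object's mass. A minimal Python sketch with hypothetical input values chosen to resemble a low-mass X-ray binary (they are not the actual Cygnus X-1 measurements):

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
DAY = 86_400       # s

def mass_function(period_s, k_ms):
    """f(M) = K^3 P / (2 pi G); a strict lower bound on the unseen object's mass."""
    return k_ms**3 * period_s / (2 * math.pi * G)

# Hypothetical companion-star orbit: 6.5-day period, 210 km/s velocity semi-amplitude.
f = mass_function(6.5 * DAY, 210e3)
print(f"mass function: {f / M_SUN:.1f} M_sun")
# Because f(M) = (M_x sin i)^3 / (M_x + M_companion)^2 <= M_x, the result here,
# roughly 6 M_sun, already exceeds the Tolman–Oppenheimer–Volkoff limit, so the
# unseen object in such a system could not be a neutron star.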
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive compact objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Objects supported by still stronger pressure, called electroweak stars, would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126 Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion (a rough sketch of the timescales involved follows below). Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may also have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
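To see why the timing of these early quasars is puzzling, the growth timescale can be sketched with the standard Eddington-limited accretion argument: the mass grows exponentially with an e-folding ("Salpeter") time set by the radiative efficiency. The sketch below is not from this article; it assumes a typical 10% radiative efficiency and illustrative seed masses.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_P = 1.673e-27        # proton mass, kg
SIGMA_T = 6.652e-29    # Thomson cross-section, m^2
YEAR = 3.156e7         # seconds per year

def salpeter_time_yr(efficiency: float = 0.1) -> float:
    """e-folding time for Eddington-limited growth: eps*c*sigma_T / (4*pi*G*m_p*(1-eps))."""
    return efficiency * C * SIGMA_T / (4 * math.pi * G * M_P * (1 - efficiency)) / YEAR

def growth_time_yr(seed_msun: float, final_msun: float, efficiency: float = 0.1) -> float:
    """Years needed to grow from a seed mass to a final mass at the Eddington limit."""
    return salpeter_time_yr(efficiency) * math.log(final_msun / seed_msun)

print(f"e-folding time:        {salpeter_time_yr() / 1e6:.0f} Myr")      # ~50 Myr
print(f"100 Msun -> 1e9 Msun:  {growth_time_yr(100, 1e9) / 1e6:.0f} Myr")  # ~800 Myr
print(f"1e5 Msun -> 1e9 Msun:  {growth_time_yr(1e5, 1e9) / 1e6:.0f} Myr")  # ~460 Myr
```

Growing a ~100 M☉ stellar seed to 10^9 M☉ at this rate takes roughly 800 million years, comparable to the entire age of the universe at z ≈ 7, which is why more massive seeds or faster-than-Eddington growth are invoked.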
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk limiting the outward radiation pressure that would otherwise prevent the black hole from accreting. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes gained public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer due to time dilation effects. Black holes have also been used as wormholes or other means of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Galactic_nuclei] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the cylindrically symmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner-Nordstrom and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: Almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi \epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly: one stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^{2}}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^{2}} \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$, where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10^-5 grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being forced into the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10^2 to 10^5 solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10^6 times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10^9–10^10 solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes some of the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
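For a sense of the size scales spanned by the mass classes described above, the Schwarzschild radius formula r_s = 2GM/c^2 (quoted later in the article) can be evaluated for a representative member of each class. This is a rough sketch; the example masses are chosen only for illustration, apart from the Sagittarius A* figure quoted elsewhere in the article.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def schwarzschild_radius_m(mass_kg: float) -> float:
    """Event-horizon radius of a nonrotating, uncharged black hole: r_s = 2GM/c^2."""
    return 2 * G * mass_kg / C**2

# Representative (illustrative) masses, one per class discussed above.
examples = {
    "micro / primordial (1e12 kg)":       1e12,
    "stellar (10 Msun)":                  10 * M_SUN,
    "intermediate (1e4 Msun)":            1e4 * M_SUN,
    "supermassive (4.3e6 Msun, Sgr A*)":  4.3e6 * M_SUN,
    "ultramassive (1e10 Msun)":           1e10 * M_SUN,
}
for label, mass in examples.items():
    r = schwarzschild_radius_m(mass)
    print(f"{label:36s} r_s = {r:.3e} m  ({r / AU:.2e} AU)")
# The horizon radius runs from subatomic scales for the lightest primordial
# holes to roughly the size of the Solar System for ultramassive holes.
```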
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvin, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\rm ISCO} = 3\,r_{s} = \frac{6GM}{c^{2}}$, where $r_{\rm ISCO}$ is the radius of the ISCO, $r_{s}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
For example, the ISCO for a particle orbiting retrograde can be as far out as about $9\,r_{s}$, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary on which photons moving on tangents to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be 3–5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free-falling matter to follow circular orbits or to stop its final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through $r_{s} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}~\mathrm{km}$, where $r_{s}$ is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to $r_{+} = \frac{GM}{c^{2}}$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10^8 M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside, points where the curvature of spacetime becomes infinite, and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole is large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or of a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse driven by hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift z ∼ 7, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is the direct collapse of the nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10^5 M☉ could have formed in this way and then grown to ~10^9 M☉. However, the very large amount of gas required for direct collapse is typically unstable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, the conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form black holes, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10^-8 kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10^15 g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as that of the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10^-25 seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little for the remaining black holes to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centres to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says that the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. 
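The analogy can be made explicit in a compact form (standard textbook notation, not quoted from this article): in units where G = c = 1, the first law of black hole mechanics for a hole of mass M, horizon area A, angular momentum J, and charge Q reads

dM = \frac{\kappa}{8\pi}\,dA + \Omega_{H}\,dJ + \Phi_{H}\,dQ,

which has the same structure as the thermodynamic relation dE = T\,dS + (work terms), with the surface gravity κ playing the role of temperature and the horizon area playing the role of entropy; whether this is merely a formal parallel or a physical identification is exactly the question taken up next.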
They are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated. Observational evidence Millions of stellar-collapse black holes of around 30 solar masses are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope depends on its aperture and the wavelength it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, they no longer cancel out, producing a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and carefully control for terrestrial noise in order to detect them. Since the first measurements in 2016, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. 
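As a simplified sketch of how such orbital tracking constrains the central mass (the S2 orbital elements below are rough values from the literature, used purely for illustration and not taken from this article), Kepler's third law converts a star's period and semi-major axis into an enclosed mass, which can then be compared with the Schwarzschild radius for that mass:

# Sketch: enclosed mass from a Keplerian orbit around Sagittarius A*, plus the
# Schwarzschild radius for that mass. S2's orbital elements are approximate.
import math

G     = 6.674e-11          # m^3 kg^-1 s^-2
c     = 2.998e8            # m/s
M_sun = 1.989e30           # kg
AU    = 1.496e11           # m
YEAR  = 3.156e7            # s
LY    = 9.461e15           # m

a_S2 = 1000 * AU           # semi-major axis of S2, roughly 1000 AU (assumed)
P_S2 = 16.0 * YEAR         # orbital period of S2, roughly 16 years (assumed)

M_enclosed = 4 * math.pi**2 * a_S2**3 / (G * P_S2**2)   # Kepler's third law
r_s = 2 * G * M_enclosed / c**2                          # Schwarzschild radius

print(f"Enclosed mass        ~ {M_enclosed / M_sun:.1e} M_sun")    # ~4e6
print(f"Schwarzschild radius ~ {r_s / LY:.1e} light-years")        # ~1e-6
print(f"0.002 ly limit / r_s ~ {0.002 * LY / r_s:.0f}")            # roughly 1,600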
In 1998, by fitting the motions of the stars to Keplerian orbits, the astronomers were able to infer that Sagittarius A* must be a 2.6×10⁶ M☉ object contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass of Sagittarius A* to 4.3×10⁶ M☉, with a radius of less than 0.002 light-years. This upper-limit radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) sets the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. 
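A similar order-of-magnitude sketch applies to the X-ray-binary mass estimates described above. The binary mass function (a standard formula; the input values below are illustrative, not measurements reported in the article) gives a strict lower bound on the compact object's mass, which can be compared with the TOV limit:

# Sketch: the binary mass function as a lower bound on a compact object's mass.
# The example period and velocity semi-amplitude are made up for illustration.
import math

G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
DAY   = 86400.0       # s
TOV_LIMIT = 2.0       # approximate maximum neutron-star mass, in solar masses

def mass_function(period_days, k_kms):
    # f(M) = P K^3 / (2 pi G): a lower bound on the compact object's mass.
    P = period_days * DAY
    K = k_kms * 1e3
    return P * K**3 / (2 * math.pi * G) / M_sun

f = mass_function(5.6, 200.0)   # hypothetical 5.6-day binary with K = 200 km/s
print(f"Mass function f(M) ~ {f:.1f} M_sun")
if f > TOV_LIMIT:
    print("Minimum mass exceeds the TOV limit: black-hole candidate")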
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as distinctive spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: The deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve—this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Even more extreme objects called electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure. 
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative explanation for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically while functioning via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies. 
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk suppressing the outward radiation pressure that would otherwise throttle accretion. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes gained public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting close to a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: A black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-118] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors that it would be the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony still had no developers of their own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retail account of the launch recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games to consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market across Sony showrooms during 1999–2000, selling 100 units. Sony finally launched the console (as the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the trademark had been registered by a third company, so the console could not be officially released; the officially distributed Sega Saturn dominated the market at first, but as Sega withdrew, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console had been the Sega Saturn, but after it left the market the PlayStation grew to a base of around 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans such as "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E", read as "you are not ready"), stylised with the controller's button symbols standing in for certain letters. The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal widened, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can also generate up to 4,000 sprites and render 180,000 polygons per second, or 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service and came with the documentation and software necessary to program PlayStation games and applications using C compilers. 
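As a rough worked example based on the throughput figures quoted above (the frame rates are illustrative assumptions, not specifications from the article), the per-second polygon rates translate into a per-frame rendering budget:

# Sketch: converting the PlayStation's quoted polygon rates into a per-frame budget.
POLY_PER_SEC_FLAT = 360_000   # flat-shaded polygons per second (quoted above)
POLY_PER_SEC      = 180_000   # polygons per second at the lower quoted rate

for fps in (30, 60):          # illustrative target frame rates
    print(f"{fps} fps: ~{POLY_PER_SEC_FLAT // fps} flat-shaded "
          f"or ~{POLY_PER_SEC // fps} polygons per frame")
# 30 fps: ~12000 flat-shaded or ~6000 polygons per frame
# 60 fps: ~6000 flat-shaded or ~3000 polygons per frame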
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo Pack". It also included a car cigarette lighter adaptor, adding a degree of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square. Rather than labelling its buttons with the letters or numbers traditionally used, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than their Japanese counterpart, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button with an LED indicator beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore any duplicates it made omitted it, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process (a toy illustration of this authentication check is sketched at the end of this article). Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates some heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling the machines of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, whose video game division came to contribute 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games to consumers at prices roughly 40% below those of ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival systems, the Nintendo Entertainment System Classic Edition and the Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
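The disc-authentication scheme described in the copy-protection passage above lends itself to a small illustration. The following Python sketch is a toy model only: the function names, data fields, and region strings are hypothetical stand-ins for the real wobble-encoded signal, and it is not Sony's implementation. It is meant to show why a bit-exact copy of the user data can still fail to boot, since a burner reproduces the data but not the pressed-in wobble signature.

```python
# Toy model (not Sony's implementation) of wobble-based disc authentication.
# A pressed disc carries a signature in its pregap that an ordinary burner cannot
# reproduce, because a drive's servo cancels the wobble instead of decoding it.
# All names, fields, and region strings below are hypothetical.

PRESSED_SIGNATURES = {"SCEA", "SCEE", "SCEI"}  # illustrative region strings

def read_pregap_signature(disc):
    """Return the wobble-encoded region string, or None if the pick-up finds nothing."""
    return disc.get("wobble_signature")

def burn_copy(original):
    """Model a conventional CD burner: the user data survives, the wobble does not."""
    return {"data": original["data"], "wobble_signature": None}

def try_boot(disc, console_region="SCEA"):
    """Model the console's boot check: require a signature, then require the right region."""
    signature = read_pregap_signature(disc)
    if signature not in PRESSED_SIGNATURES:
        return "refused: no authentication wobble detected"
    if signature != console_region:
        return f"refused: regional lockout (disc {signature}, console {console_region})"
    return f"booting {disc['data']}"

pressed = {"data": "GAME.EXE", "wobble_signature": "SCEA"}
copied = burn_copy(pressed)

print(try_boot(pressed))  # booting GAME.EXE
print(try_boot(copied))   # refused: no authentication wobble detected
```

The same toy structure also captures why the signature could double as a regional lockout: the console accepts only the string matching its own region.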
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Advertising_industry] | [TOKENS: 233] |
Contents Advertising industry The advertising industry is the global industry of public relations and marketing companies, media services, and advertising agencies. The industry's largest companies include the agency groups WPP plc, Omnicom, Publicis Groupe, Interpublic and Dentsu. It is a global, multibillion-dollar business that connects manufacturers and consumers. The industry ranges from nonprofit organizations to Fortune 500 companies. The advertising industry can provide valuable insights into audience behavior and generate data that enables businesses to understand their target audiences in depth. As the advertising sector expanded, analytical tools were developed to facilitate this evaluation, helping businesses create better products and measure their investment in growth. In the United States, there are more than 65,000 advertising agencies employing nearly 250,000 people, with annual revenues of $166.8 billion, as of 2014. In 2016, global advertising sales reached $493 billion. In 2017, digital ad sales were estimated to have surpassed the TV market for the first time. Trade associations Trade associations representing parts or all of the advertising industry include: Programs References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Alphonse_Daudet] | [TOKENS: 1211] |
Contents Alphonse Daudet Alphonse Daudet (French: [dodɛ]; 13 May 1840 – 16 December 1897) was a French novelist. He was the husband of Julia Daudet and father of Edmée, Léon and Lucien Daudet. Early life Daudet was born in Nîmes, France. His family, on both sides, belonged to the bourgeoisie. His father, Vincent Daudet, was a silk manufacturer—a man dogged through life by misfortune and failure. Alphonse, amid much truancy, had a depressing boyhood. In 1856 he left Lyon, where his schooldays had been mainly spent, and began his career as a schoolteacher at Alès, Gard, in the south of France. The position proved to be intolerable and Daudet said later that for months after leaving Alès he would wake with horror, thinking he was still among his unruly pupils. These experiences and others were reflected in his novel Le Petit Chose. On 1 November 1857, he abandoned teaching and took refuge with his brother Ernest Daudet, three years his senior, who was trying, "and thereto soberly", to make a living as a journalist in Paris. Alphonse took to writing, and his poems were collected into a small volume, Les Amoureuses (1858), which met with a fair reception. He obtained employment on Le Figaro, then under Cartier de Villemessant's energetic editorship, wrote two or three plays, and began to be recognized in literary communities as possessing distinction and promise. Morny, Napoleon III's all-powerful minister, appointed him to be one of his secretaries—a post which he held till Morny's death in 1865. Literary career In 1866, Daudet's Lettres de mon moulin (Letters from My Windmill), written in Clamart, near Paris, and alluding to a windmill in Fontvieille, Provence, won the attention of many readers. The first of his longer books, Le Petit Chose (1868), did not, however, produce popular sensation. It is, in the main, the story of his own earlier years told with much grace and pathos. The year 1872 brought the famous Aventures prodigieuses de Tartarin de Tarascon, and the three-act play L'Arlésienne. But Fromont jeune et Risler aîné (1874) at once took the world by storm. It struck a note, not new certainly in English literature, but comparatively new in French. His creativeness resulted in characters that were real and also typical. Jack, a novel about an illegitimate child, a martyr to his mother's selfishness, which followed in 1876, served only to deepen the same impression. Henceforward his career was that of a successful man of letters, mainly spent writing novels: Le Nabab (1877), Les Rois en exil (1879), Numa Roumestan (1881), Sapho (1884), L'Immortel (1888), and writing for the stage: reminiscing in Trente ans de Paris (1887) and Souvenirs d'un homme de lettres (1888). These, with the three Tartarins–Tartarin de Tarascon, Tartarin sur les Alpes, Port-Tarascon–and the short stories, written for the most part before he had acquired fame and fortune, constitute his life work. L'Immortel is a bitter attack on the Académie française, to which august body Daudet never belonged. Daudet also wrote for children, including La Belle Nivernaise, the story of an old boat and her crew. In 1867 Daudet married Julia Allard, author of Impressions de nature et d'art (1879), L'Enfance d'une Parisienne (1883), and some literary studies written under the pseudonym "Karl Steen". Daudet was far from faithful, and was one of a generation of French literary syphilitics. Having lost his virginity at the age of twelve, he then slept with his friends' mistresses throughout his marriage. 
Daudet would undergo several painful treatments and operations for his subsequently paralysing disease. His journal entries relating to the pain he experienced from tabes dorsalis are collected in the volume In the Land of Pain, translated by Julian Barnes. He died in Paris on 16 December 1897, and was interred at that city's Père Lachaise Cemetery. Political and social views, controversy and legacy Daudet was a monarchist and a fervent opponent of the French Republic. He was an antisemite,[citation needed] though less famously so than his son Léon. The main character of Le Nabab was inspired by a Jewish politician who was elected as a deputy for Nîmes. Daudet campaigned against him and lost.[citation needed] Daudet counted many antisemitic literary figures amongst his friends, including Edouard Drumont, who founded the Antisemitic League of France and founded and edited the anti-Semitic newspaper La Libre Parole. It has been argued that Daudet deliberately exaggerated his links to Provence to further his literary career and social success (following Frederic Mistral's success), including lying to his future wife about his "Provençal" roots. Numerous colleges and schools in contemporary France bear his name and his books are widely read and several are in print.[citation needed] Works Major works, and works in English translation (date given of first translation). For a complete bibliography see Works by Alphonse Daudet [fr]. References Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Active_galactic_nucleus] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the cylindrically symmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner-Nordstrom and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: Almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this theory has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly; one stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^{2}}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^{2}} \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$, where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes whose progenitors were low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being forced into the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\mathrm{ISCO}} = 3\,r_{\mathrm{s}} = \frac{6GM}{c^{2}}$, where $r_{\mathrm{ISCO}}$ is the radius of the ISCO, $r_{\mathrm{s}}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
For example, the ISCO for a particle orbiting retrograde can be as far out as about $9r_{\mathrm{s}}$, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary on which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be between 1 and 3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3 and 5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free-falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through $r_{\mathrm{s}} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}\ \mathrm{km}$, where $r_{\mathrm{s}}$ is the Schwarzschild radius and $M_{\odot}$ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to $r_{+} = \frac{GM}{c^{2}}$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water (these scalings are illustrated numerically in the short sketch at the end of this article). The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside, a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity. Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point. However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by the gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse driven by hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse and will start fusing more and more massive elements, until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift z ∼ 7, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is the direct collapse of the nearly pure hydrogen gas (low-metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10^5 M☉ could have formed in this way and then grown to ~10^9 M☉. However, the very large amount of gas required for direct collapse is typically not stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar. A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in the regions would become large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10^−8 kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10^15 g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10^−25 seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two members of a supermassive black hole binary approach each other, most nearby stars are ejected, leaving little for the black holes to interact with gravitationally that could carry away orbital energy and allow them to draw closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin (the sketch after this passage reproduces these two limiting values). About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
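Two quantitative statements above can be reproduced from standard formulas; the sketch below is illustrative only, and the formulas and constants are textbook results rather than figures taken from this article. The 5.7% and 42% values correspond to the binding energy released by matter reaching the ISCO of a non-rotating and of a maximally rotating (prograde) black hole, 1 − √(8/9) and 1 − 1/√3 respectively, and the Eddington limit corresponds to the luminosity L_Edd = 4πGM·m_p·c/σ_T at which radiation pressure on ionized hydrogen balances gravity.

import math

# Radiative efficiency = 1 - E_ISCO, the specific orbital energy (per unit
# rest-mass energy) of matter at the innermost stable circular orbit.
eff_schwarzschild = 1 - math.sqrt(8.0 / 9.0)     # non-rotating black hole
eff_extremal_kerr = 1 - 1.0 / math.sqrt(3.0)     # maximal spin, prograde disk
print(round(eff_schwarzschild * 100, 1), round(eff_extremal_kerr * 100, 1))  # 5.7 42.3 (percent)

# Eddington luminosity for one solar mass, in watts.
G, c = 6.674e-11, 2.998e8              # SI units
m_p, sigma_T = 1.673e-27, 6.652e-29    # proton mass (kg), Thomson cross-section (m^2)
M_sun = 1.989e30
L_edd = 4 * math.pi * G * M_sun * m_p * c / sigma_T
print(f"{L_edd:.2e}")                  # ≈ 1.26e31 W per solar mass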
Accretion beyond the Eddington limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins (a short numerical check appears after this passage). This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10^−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
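The 62-nanokelvin figure above follows from the standard Hawking temperature formula T = ħc^3/(8πGM·k_B), a well-known result of semiclassical gravity rather than something stated in this article; the sketch below simply evaluates it with approximate SI constants.

import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

def hawking_temperature(mass_kg):
    # T = hbar c^3 / (8 pi G M k_B): inversely proportional to the mass
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun) * 1e9)   # ≈ 62 nanokelvin for a 1 M☉ black hole

Because T scales as 1/M, inverting the same formula at 2.7 K gives a mass of roughly 4.5×10^22 kg, a little over half the Moon's mass, consistent with the statement above that only black holes lighter than the Moon can currently evaporate.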
These laws of black hole mechanics are not equivalent to the laws of thermodynamics, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated. Observational evidence Millions of black holes of around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths (a rough order-of-magnitude check follows this passage). By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the size of the Earth's diameter. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometres long and must carefully control for terrestrial noise to be able to detect them. Since the first measurements in 2016, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
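The claim above that a single dish would need to be roughly Earth-sized can be checked with a diffraction-limit estimate. The sketch below is illustrative only: the 1.3 mm observing wavelength and the ~50 microarcsecond shadow diameter are assumed representative values for millimetre VLBI and for Sagittarius A*, not figures taken from this article.

wavelength = 1.3e-3            # observing wavelength in metres (assumed ~1.3 mm)
shadow_rad = 50e-6 / 206265.0  # ~50 microarcseconds converted to radians (assumed)

# Diffraction limit: angular resolution ~ wavelength / aperture, so the aperture
# needed to resolve structure about half the shadow's size is:
needed_aperture_m = wavelength / (shadow_rad / 2)

print(needed_aperture_m / 1000)   # ≈ 1.1e4 km, comparable to Earth's diameter of ~1.3e4 km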
In 1998, by fitting the motions of the stars orbiting Sagittarius A* to Keplerian orbits, astronomers were able to infer that a 2.6×10^6 M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the mass of Sagittarius A* to 4.3×10^6 M☉, contained within a radius of less than 0.002 light-years (a rough Keplerian estimate follows this passage). This upper-limit radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
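The Keplerian fitting described above can be illustrated with a one-line order-of-magnitude estimate. The orbital parameters below are approximate published values for the star S2 (semi-major axis ~1000 AU, period ~16 years) and are assumptions for illustration, not figures from this article; Kepler's third law in solar units gives M/M☉ ≈ a^3/T^2 with a in AU and T in years.

a_au = 1000.0   # assumed semi-major axis of S2's orbit, in AU
T_yr = 16.0     # assumed orbital period of S2, in years

mass_solar = a_au**3 / T_yr**2
print(f"{mass_solar:.2e}")   # ≈ 3.9e6 solar masses, close to the 4.3e6 M☉ quoted above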
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied more carefully in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve—this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Even stronger stars called electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but to function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk limiting the outward radiation pressure that would otherwise cap the accretion rate. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew to public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/First_Continental_Congress] | [TOKENS: 1192] |
Contents First Continental Congress The First Continental Congress was a meeting of delegates from twelve of the Thirteen Colonies (Georgia did not attend) held from September 5 to October 26, 1774, at Carpenters' Hall in Philadelphia at the beginning of the American Revolution. The meeting was organized by the delegates after the British Navy implemented a blockade of Boston Harbor and the Parliament of Great Britain passed the punitive Intolerable Acts in response to the Boston Tea Party. During the opening weeks of the Congress, the delegates conducted a spirited discussion about how the colonies could collectively respond to the British government's coercive actions, and they worked to make common cause. As a prelude to its decisions, the Congress's first action was the adoption of the Suffolk Resolves, a measure drawn up by several counties in Massachusetts that included a declaration of grievances, called for a trade boycott of British goods, and urged each colony to set up and train its own militia. A less radical plan was then proposed to create a Union of Great Britain and the Colonies, but the delegates tabled the measure and later struck it from the record of their proceedings. The First Continental Congress agreed on a Declaration and Resolves that included the Continental Association, a proposal for an embargo on British trade. They also drew up a Petition to the King pleading for redress of their grievances and repeal of the Intolerable Acts. That appeal was unsuccessful, leading delegates from the colonies to convene the Second Continental Congress, also held in Philadelphia, the following May, shortly after the Battles of Lexington and Concord, to organize the defense of the colonies as the American Revolutionary War began. Convention The Congress met from September 5 to October 26, 1774, in Carpenters' Hall in Philadelphia, with delegates from 12 of the Thirteen Colonies participating, Georgia being the one colony not to attend. The delegates were elected by the people of the respective colonies, by the colonial legislatures, or by the Committees of Correspondence of the colonies. Loyalist sentiments outweighed Patriot views in Georgia, and that colony did not join the revolutionary cause until the following year, when it sent delegates to the Second Continental Congress. Peyton Randolph of Virginia was elected president of the Congress on the opening day; he served through October 22, when ill health forced him to retire, and Henry Middleton of South Carolina was elected in his place for the balance of the session. Charles Thomson, leader of the Philadelphia Committee of Correspondence, was selected as the congressional secretary. The rules adopted by the delegates were designed to guard the equality of participants and to promote free-flowing debate. As the deliberations progressed, it became clear that those in attendance were not of one mind concerning why they were there. Conservatives such as Joseph Galloway (Pennsylvania), John Dickinson (Pennsylvania), John Jay (New York), and Edward Rutledge (South Carolina) believed their task to be forging policies to pressure Parliament to rescind its unreasonable acts. Their ultimate goal was to develop a reasonable solution to the difficulties and bring about reconciliation between the Colonies and Great Britain.
Others such as Patrick Henry (Virginia), Roger Sherman (Connecticut), Samuel Adams (Massachusetts), and John Adams (Massachusetts) believed their task to be developing a decisive statement of the rights and liberties of the Colonies. Their ultimate goal was to end what they felt to be the abuses of parliamentary authority and to retain their rights, which had been guaranteed under Colonial charters and the English constitution. Roger Sherman denied the legislative authority of Parliament, and Patrick Henry believed that the Congress needed to develop a completely new system of government, independent from Great Britain, for the existing Colonial governments were already dissolved. In contrast to these ideas, Joseph Galloway put forward a "Plan of Union" which suggested that an American legislative body should be formed with some authority, whose consent would be required for imperial measures. Declaration and Resolves In the end, the voices of compromise carried the day. Rather than calling for independence, the First Continental Congress passed and signed the Continental Association in its Declaration and Resolves, which called for a boycott of British goods to take effect in December 1774. After Congress signed on October 20, 1774, embracing non-exportation, they also planned nonimportation of slaves beginning December 1, which would have abolished the slave trade in the United States of America 33 years before it actually ended. Accomplishments The primary accomplishment of the First Continental Congress was a compact among the colonies to boycott British goods beginning on December 1, 1774, unless parliament should rescind the Intolerable Acts. Additionally, Great Britain's colonies in the West Indies were threatened with a boycott unless they agreed to non-importation of British goods. Imports from Britain dropped by 97 percent in 1775, compared with the previous year. Committees of observation and inspection were to be formed in each Colony to ensure compliance with the boycott. It was further agreed that if the Intolerable Acts were not repealed, the colonies would also cease exports to Britain after September 10, 1775. The Houses of Assembly of each participating colony approved the proceedings of the Congress, with the exception of New York. The boycott was successfully implemented, but its potential for altering British colonial policy was cut off by the outbreak of hostilities in April 1775. Congress also voted to meet again the following year if their grievances were not addressed satisfactorily. Anticipating that there would be cause to convene a second congress, delegates resolved to send letters of invitation to those colonies that had not joined them in Philadelphia, including Quebec, Saint John's Island (now Prince Edward Island), Nova Scotia, Georgia, East Florida, and West Florida. Of these, only Georgia would ultimately send delegates to the next Congress. List of delegates Gallery See also References Informational notes Citations Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ABC_News_(United_States)] | [TOKENS: 2718] |
Contents ABC News (United States) ABC News is the news division of the American television network ABC. Its flagship program is the daily evening newscast ABC World News Tonight with David Muir; other programs include morning news-talk show Good Morning America, Nightline, 20/20, and This Week with George Stephanopoulos. The network also includes daytime talk shows The View, Live with Kelly and Mark, and Tamron Hall. In addition to the division's television programs, ABC News has radio and digital outlets, including ABC News Radio and ABC News Live, plus various podcasts hosted by ABC News personalities. History ABC began in 1943 as the NBC Blue Network, a radio network that was spun off from NBC, as ordered by the Federal Communications Commission (FCC) in 1942. The reason for the order was to expand competition in radio broadcasting in the United States, specifically news and political broadcasting, and broaden the projected points of view. Only a few companies, such as NBC and CBS, dominated the radio market. NBC conducted the split voluntarily in case its appeal of the ruling was denied, and it was forced to split its two networks into separate companies. Regular television news broadcasts on ABC began soon after the network signed on its initial owned-and-operated television station (WJZ-TV, now WABC-TV) and production center in New York City in August 1948. Broadcasts continued as the ABC network expanded nationwide. Until the early 1970s, ABC News programs and ABC in general consistently ranked third in viewership behind CBS and NBC news programs. ABC had fewer affiliate stations and a weaker prime-time programming slate to support the network's news operations compared to the two larger networks, each of which had established their radio news operations during the 1930s. By the 1970s, the network had effectively turned around, with its prime-time entertainment programs achieving more substantial ratings and drawing in higher advertising revenue and profits for ABC overall. With the appointment of the president of ABC Sports, Roone Arledge as president of ABC News in 1977, ABC invested the resources to make it a significant source of news content. Arledge, known for experimenting with the broadcast "model", created many of ABC News' most popular and enduring programs, including 20/20, World News Tonight, This Week, Nightline, and Primetime Live. ABC News' longtime slogan, "More Americans get their news from ABC News than from any other source." (introduced in the late 1980s), was a claim referring to the number of people who watch, listen to and read ABC News content on television, radio and (eventually) the Internet, and not necessarily to the telecasts alone. In June 1998, ABC News (which owned an 80% stake in the service), Nine Network and ITN sold their respective interests in Worldwide Television News to the Associated Press.[citation needed] Additionally, ABC News signed a multi-year content deal with AP for its affiliate video service, Associated Press Television News (APTV), while providing material from ABC's news video service, ABC News One, to APTV. Scandal erupted on October 7, 1985, over a decision by Arledge, president of ABC News and Sports, to kill a 13-minute report about Marilyn Monroe, possibly due to his close ties to Ethel Kennedy. 20/20 drew criticism from the program's co-anchors, Hugh Downs and Barbara Walters, and the executive producer, Av Westin. 
Arledge said that he had killed the piece because it was "gossip-column stuff" and "does not live up to its billing." Downs, however, took issue with Arledge's judgment. "I am upset about the way it was handled," he said in an interview. "I honestly believe that this is more carefully documented than anything any network did during Watergate. I lament the fact that the decision reflects badly on people I respect and it reflects badly on me and the broadcast." Additionally, Westin said: "I don't anticipate not putting it on the air. The journalism is solid. Everything in there has two sources. We are documenting that there was a relationship between Bobby and Marilyn and Jack and Marilyn. A variety of eyewitnesses attest to that on camera." Two other aspects of the unaired report, according to an ABC staff member who has seen it, are eyewitness accounts of wiretapping of Monroe's home by Jimmy Hoffa, the teamster leader, that reveal meetings between her and the Kennedy brothers, and accounts of a visit to Monroe by Robert F. Kennedy on the day of her death. Fred Otash, a detective who said he was the chief wiretapper, is interviewed on camera, and ABC staff members said three other wiretappers corroborated his account. In addition, several people not in the book say on camera that Monroe kept diaries with references to meetings with the Kennedy brothers, according to a staff member who has seen the report. "It set out to be a piece which would demonstrate that because of alleged relations between Robert Kennedy and John F. Kennedy and Monroe, the presidency was compromised because organized crime was involved," he said. "Based on what has been uncovered so far, there was no evidence." Arledge's decision to kill the broadcast resulted in the subsequent decision of Geraldo Rivera to leave ABC entirely. Rivera was a 20/20 correspondent but did not work on that story. He had been publicly critical of Arledge's decision. Arledge, a champion and defender of Rivera, said he thought the story needed more work. The story probed purported affairs between actress Marilyn Monroe, President John F. Kennedy, and his brother Robert F. Kennedy. On August 7, 2014, ABC announced that it would relaunch its radio network division, ABC Radio, on January 1, 2015. The change occurred following the announcement that Cumulus would replace its ABC News radio service with Westwood One News (via CNN). On September 20, 2019, ABC Radio was renamed as ABC Audio as the network has evolved to offer a podcast portfolio and other forms of on-demand and linear content. In April 2018, it was announced that FiveThirtyEight would be transferred to ABC News from ESPN, Inc., majority owned by the Walt Disney Company. On September 10, 2018, ABC News launched a second attempt to extend its Good Morning America brand into the afternoon with GMA3: What You Need to Know. In May 2019, ABC News Live, a news focused streaming channel, was launched on Roku. Following a reorganization of ABC's parent company, The Walt Disney Company which created the Walt Disney Direct-to-Consumer and International segment in March 2018, ABC News Digital and Live Streaming, including ABC News Live and FiveThirtyEight, were transferred to the new segment. In an October 2018 Simmons Research survey of 38 news organizations, ABC News was ranked the second most trusted news organization by Americans, behind The Wall Street Journal. 
In December 2024, ABC's owner, the Walt Disney Company, settled a defamation lawsuit brought by Donald Trump against ABC News, by agreeing to donate $15 million to Trump's future presidential library foundation and paying $1 million in Trump's legal fees. Disney also agreed to ABC and anchor George Stephanopoulos publishing a statement saying they regretted that Stephanopoulos, in an interview, had repeatedly said that Trump had been found liable for raping E. Jean Carroll. In November 2025, President Donald Trump became angry with an ABC News reporter for asking a question about the Jeffrey Epstein case and called for the revocation of the network's broadcasting license. Programming Other services ABC News Radio is the radio service of ABC Audio, a division of the ABC News. Formerly known as ABC Radio News, ABC News Radio feeds through Skyview Networks with newscasts on the hour to its affiliates. ABC News Radio is the largest commercial radio news organization in the US. ABCNews.com launched on May 15, 1997, by ABC News Internet Ventures, a joint venture between Starwave and ABC formed in April 1997. Starwave had owned and operated ESPNet SportsZone (later known as ESPN.com) since 1995, which licensed the ESPN brand and video clips from ABC's corporate sister ESPN Inc. Disney wanted more control of their Internet properties, which meant ABCNews.com was operated as a joint venture with ABC News having editorial control. Disney had also bought a minority stake in Starwave before the launch of ABCNews.com and would later buy the company outright. The website initially had a dedicated staff of about 30. In addition to articles, it featured short video clips and audio from the start, delivered using RealAudio and RealVideo technology. Some content was also available via America Online. In 2011, ABC News and Yahoo News announced a strategic partnership to share ABC's online reporting on Yahoo's website; the deal expanded in 2015 to include the Disney/ABC Television Group. In 2018, ABC News, and Good Morning America specifically, ended the hosting partnership with Yahoo, instead opting to continue separate web presences. Although Disney retired the Go.com branding in 2013, ABC News' website has the Go.com branding, with its URL reading ABCNews.Go.com. ABC News Live is a 24/7 streaming video news channel for breaking news, live events, newscasts and longer-form reports and documentaries operated by ABC News since 2018, The channel is available through DirecTV Stream, Disney+, FuboTV, Google TV, Haystack News, Hulu, LG Channels, Pluto TV, Prime Video Live TV, Samsung TV Plus, Sling Freestream, The Roku Channel, Tubi, Vizio Watch Free+, Xumo, and YouTube TV. The service is under the direction of Justin Dial, Vice President of Streaming Content, Seniboye Tienabeso, Executive Director of ABC News Live, Chandra Zeikel, Executive Producer and Eric Ortega, Executive Producer. This unit is producing: Satellite News Channel was a joint venture between ABC News and Group W that launched on June 21, 1982, as a satellite-delivered cable television network. SNC used footage from ABC News and seven Washington, D.C.–based crews and stories from other overseas networks to provide a rotating newscast every 20 minutes. However, this channel had difficulty getting clearance from cable systems, so ABC News and Group W decided to sell it to its competitor, CNN (a subsidiary of Time Warner's Turner Broadcasting System). CNN ceased Satellite News Channel's operations on October 27, 1983. 
SNC was either replaced by CNN or CNN2 on most cable systems. ABC News Now was a 24-hour cable news network that launched on July 26, 2004, as a digital subchannel by ABC News, being the company's second attempt in the 24-hour cable news world after Satellite News Channel. It was offered via digital television, broadband and streaming video at ABCNews.com and on mobile phones. It delivered breaking news, headline news each half hour, and a wide range of entertainment and lifestyle programming. The channel was available in the United States and Europe. Its Talk Back feature allowed viewers to voice their input by submitting videos and personal thoughts on controversial issues and current topics. It was shut down as a digital subchannel after its experimental phase ended with the Presidential inauguration in 2005. ABC News Now was replaced on cable providers with Fusion on October 28, 2013. Fusion was a digital cable and satellite network owned and operated by Fusion Media Group, LLC, which was a joint venture between ABC News and Univision Communications. ABC and Univision formally announced their launch on May 2, 2012. Launched on October 28, 2013, Fusion features a mix of traditional news and investigative programs along with satirical content aimed at English-speaking Hispanic and Latino American adults between the ages of 18 and 34. The network replaced ABC News Now, a mainly streaming service of ABC News content. In December 2015, it was reported that Disney was in talks to sell its stake in Fusion to Univision. The split was complete on April 21, 2016; Univision alone would continue to operate Fusion until December 31, 2021, when it shut down the network. Personnel New York (Main Headquarters) Washington, D.C. Atlanta London San Francisco Current ABC News Radio personnel Contributors In Australia, Sky News Australia airs daily broadcasts of ABC World News Tonight (at 10:30 a.m.) and Nightline (at 1:30 a.m.) as well as weekly airings of 20/20 (on Wednesdays at 1:30 p.m., with an extended version at 2:00 p.m. on Sundays) and occasionally Primetime (at 1:30 p.m. on Thursdays, with extended edition at 2:00 p.m. on Saturdays). Coincidentally, that country's public broadcasting, the Australian Broadcasting Corporation, operates its unrelated news division that is also named ABC News. The U.S. ABC News maintains a content-sharing agreement with the Nine Network, which also broadcasts GMA domestically in the early morning before its own breakfast program. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Slovenia] | [TOKENS: 3360] |
Contents History of the Jews in Slovenia The history of the Jews in Slovenia and areas connected with it goes back to the times of Ancient Rome. In 2011, the small Slovenian Jewish community (Slovene: Judovska skupnost Slovenije) was estimated at 100 to 130 members, of whom around 130 are officially registered, most of whom live in the capital, Ljubljana. History of the community The ancient Jewish community of Slovenia predated the 6th-century Slavic settlement of the Eastern Alps, when the Slavic ancestors of the present-day Slovenes entered their current territory. The first Jews arrived in what is now Slovenia in Roman times, with archaeological evidence of Jews found in Maribor and in the village of Škocjan in Lower Carniola. In Škocjan, an engraved menorah dating from the 5th century AD was found in a graveyard. In the 12th century, Jews arrived in the Slovene lands fleeing poverty in Italy and central Europe. Even though they were forced to live in ghettos, many Jews prospered. Relations between Jews and the local Christian population were generally peaceful. In Maribor, Jews were successful bankers, winegrowers, and millers. Several "Jewish courts" (Judenhof) existed in Styria, settling disputes between Jews and Christians. Israel Isserlein, who authored several essays on medieval Jewish life in Lower Styria, was the most important rabbi at the time, having lived in Maribor. In 1397, Jewish ghettos in Radgona and Ptuj were set ablaze by anonymous anti-Jewish assailants. The first synagogue in Ljubljana was mentioned in 1213. Issued with a Privilegium, Jews were able to settle an area of Ljubljana located on the left bank of the Ljubljanica River. The streets Židovska ulica (Jewish Street) and Židovska steza (Jewish Lane), which now occupy the area, are still reminiscent of that period. The wealth of the Jews bred resentment among the Inner Austrian nobility and the burghers, with many refusing to repay Jewish money-lenders, and local merchants considered Jews to be competitors. The antisemitism of the Catholic Church also played an important role in creating animosity against the Jews. In 1494 and 1495 the assemblies of Styria and Carinthia offered Austrian Emperor Maximilian a bounty for the expulsion of the Jews from both provinces. Maximilian granted their request, citing as reasons for the expulsion the Jewish pollution of the Christian sacrament, the ritual killings of Christian children, and the defrauding of debtors. The expulsions started immediately, with the last Jews expelled by 1718.[dubious – discuss] The Jews were expelled from Maribor in 1496. Following separate demands by the citizens of Ljubljana for the expulsion of the Jews, Jews were expelled from Ljubljana in 1515. After the expulsion of the Jewish community, the Maribor Synagogue was turned into a church. In 1709, the Holy Roman Emperor Charles VI, ruler of the Habsburg monarchy, issued a decree allowing Jews to return to Inner Austria. Nevertheless, Jews at that time settled almost exclusively in the commercial city of Trieste and, to a much smaller extent, in the town of Gorizia (now both part of Italy). The decree was overturned in 1817 by Francis I, and Jews were granted full civil and political rights only with the Austrian constitution of 1867.
Nevertheless, the Slovene Lands remained virtually without a consistent Jewish population, with the exception of Gorizia, Trieste, the region of Prekmurje, and some smaller towns in the western part of the County of Gorizia and Gradisca (Gradisca, Cervignano), which were inhabited mostly by a Friulian-speaking population. According to the census of 1910, only 146 Jews lived in the territory of present-day Slovenia, excluding the Prekmurje region. Despite this, as elsewhere in Austria-Hungary, antisemitism began to intensify in Slovenia from the mid-19th century onward, propagated by prominent Slovene Catholic leaders such as Bishop Anton Mahnič and Janez Evangelist Krek. The former called for a war against Judaism, and the latter sought to persuade believers that the Jews were transmitters of the most harmful influences. In 1918, in the chaotic transition between Austria-Hungary and the new Kingdom of Serbs, Croats and Slovenes, riots broke out against Jews and Hungarians in many places in Prekmurje. Soldiers returning from the front and locals looted Jewish and Hungarian shops. On November 4, 1918, in Beltinci, locals looted Jewish homes and shops, tortured Jews, and set fire to the synagogue. After the pogrom, the once powerful Beltinci Orthodox Jewish community, numbering 150 in the mid-19th century, disappeared. In 1937 the local authorities demolished the Beltinci synagogue. Rampant antisemitism was among the reasons why few Jews decided to settle in the area, and the overall Jewish population remained at a very low level. In the 1920s, after the formation of the Kingdom of Serbs, Croats and Slovenes (Yugoslavia), the local Jewish community merged with the Jewish community of Zagreb, Croatia. According to the 1931 census, there were about 900 Jews in the Drava Banovina, mostly concentrated in Prekmurje, which was part of the Kingdom of Hungary prior to 1919. This was the reason why in the mid-1930s Murska Sobota became the seat of the Jewish Community of Slovenia. During that period, the Jewish population was reinvigorated by many immigrants fleeing from neighbouring Austria and Nazi Germany to the more tolerant Kingdom of Yugoslavia. Nevertheless, in the prewar period the Slovene Roman Catholic Church and the largest political party affiliated with it, the Slovenian People's Party, engaged in antisemitism, with Catholic papers writing about "Jews" as "a disaster for our countryside", as "fraudsters" and as "traitors to Christ", while the main Slovene Catholic daily, Slovenec, informed local Jews that their "road out of Yugoslavia ... was open" and that from Slovenia "we export such goods [i.e. Jews] without compensation". While interior minister in the Yugoslav government, the leading Slovene politician and former Catholic priest Anton Korošec declared "all Jews, Communists, and Freemasons as traitors, conspirators, and enemies of the State". Then in 1940 Korošec introduced two antisemitic laws in Yugoslavia, banning Jews from the food industry and restricting the number of Jewish students in high schools and universities. Slovene Jews were severely affected; as Sharika Horvat noted in her testimony for the Shoah Foundation, "everything fell apart .... under the Korošec government." According to official Yugoslav data, the number of self-declared Jews (by religion, not ancestry) in Yugoslav Slovenia rose to 1,533 by 1939. In that year, there were 288 declared Jews in Maribor, 273 in Ljubljana, 270 in Murska Sobota, 210 in Lendava and 66 in Celje.
The other 400 Jews lived scattered around the country, with a quarter of them living in the Prekmurje region. Prior to World War II, there were two active synagogues in Slovenia, one in Murska Sobota and one in Lendava. The overall number of Jews prior to the Axis invasion of Yugoslavia in April 1941 is estimated to have been around 2,500, including baptised Jews and refugees from Austria and Germany. The Jewish community, very small even before World War II and the Shoah, was further reduced by the Nazi occupation between 1941 and 1945; the Jews in northern and eastern Slovenia (Slovenian Styria, Upper Carniola, Slovenian Carinthia, and Posavje), which was annexed to the Third Reich, were deported to concentration camps as early as the late spring of 1941.[citation needed] Very few survived.[quantify] In Ljubljana and in Lower Carniola, which came under Italian occupation, the Jews were relatively safe until September 1943, when most of the zone was occupied by Nazi German forces.[citation needed] In late 1943, most of them were deported to concentration camps, although some managed to escape, especially by fleeing to the zones freed by the partisan resistance.[citation needed] In Ljubljana, 32 Jews were able to hide until September 1944, when they were betrayed and arrested in raids by the collaborationist Slovene Home Guard police and handed over to the Nazis, who then sent them to Auschwitz, where most were exterminated. The Slovene Home Guard greatly intensified the antisemitism already present in prewar Slovene Catholic circles, engaging in vicious antisemitic propaganda. Thus the Slovene Home Guard leader, Leon Rupnik, attacked Jews in virtually all his public speeches. In 1944, the Home Guard newspaper wrote: "Judaism wants to enslave the whole world. It can enslave it if it also economically destroys all the nations. That is why it drove nations into war to destroy themselves and thereby benefit the Jews. Communism is the most loyal executor of Jewish orders, along with liberal democracy. Both ideas were created by Jews for non-Jewish peoples. The Slovenian nation also wants to bring Judaism to its knees, along with its moral decay and impoverishment." The influential Catholic priest Lambert Ehrlich, who advocated collaboration with the Italian Fascist authorities, campaigned against "Jewish Satanism," which he maintained was trying to get its hands on other peoples' national treasures.[citation needed] The Jews of Prekmurje, where the majority of Slovenian Jewry lived prior to World War II, suffered the same fate as the Jews of Hungary. Following the German occupation of Hungary, almost the entire Jewish population of the Prekmurje region was deported to Auschwitz. Very few survived. Altogether, it is estimated that of the 1,500 Jews in Slovenia in 1939, only 200 managed to survive, meaning 87% were exterminated by the Nazis, among the highest rates in Europe. Some Slovene Jews managed to save themselves by joining the partisans. Unlike the Polish resistance, which did not allow Jews in its ranks,[citation needed] the Yugoslav partisans welcomed Jews. 3,254 Jews in the former Yugoslavia survived by joining the partisans, more than one-fifth of all survivors. After the war, 10 Jewish partisans were named Yugoslav national heroes. For assisting Jews during the Holocaust, 15 Slovenes have been named Righteous Among the Nations by Yad Vashem. Under Communism in Yugoslavia, the Jewish community in the Socialist Republic of Slovenia numbered fewer than 100 members.
The Federation of Jewish Communities was reestablished, and upon the establishment of the State of Israel (1948), the Federation sought and received permission from the Yugoslav authorities to organize Jewish emigration to Israel. 8,000 Yugoslav Jews, among them Slovene Jews, left for Israel; all were allowed to take their property with them. In 1953, the synagogue of Murska Sobota, the only one remaining after the Shoah, which the handful of Jewish survivors had been unable to maintain and had therefore sold to the city in 1949, was demolished by the local Communist authorities to make way for new apartments. Many Jews were expelled from Yugoslavia as "ethnic Germans",[citation needed] and most Jewish property was confiscated.[citation needed] In Ljubljana, Jewish properties were confiscated as "enemy property" by the City Confiscation Committee (Slovene: Mestna zaplembena komisija) and turned over to the communist elite. These properties included the Ebenspanger Mansion (used by Boris Kidrič), the Mergenthaler Mansion (used by the OZNA, or secret police), and the Pollak mansion (used by Edvard Kocbek). In addition, the Moskovič mansion was sold under questionable circumstances and is now used by the Social Democrats, the successor of the Communist Party of Slovenia. The Judovska občina v Ljubljani (Jewish Community of Ljubljana) was officially re-formed following World War II. Its first president was Artur Kon, followed by Aleksandar Švarc, and by Roza Fertig-Švarc in 1988. In 1969, it numbered only 84 members, and its membership was declining due to emigration and aging. In the 1960s and 1970s, there was a revival of Jewish themes in Slovenian literature, almost exclusively by women authors. Berta Bojetu was the most renowned Jewish author who wrote in Slovene. Others included Miriam Steiner and Zlata Medic-Vokač. After 1990 In the last Yugoslav census in 1991, 199 Slovenes declared themselves of the Jewish religion, and in the 2011 census, the number was 99. The Jewish community today is estimated at only 100 members. The community consists of people of Ashkenazi and Sephardi descent. In 1999, the first Chief Rabbi for Slovenia since 1941 was appointed. Before that, religious services were provided with help from the Jewish community of Zagreb. The present chief rabbi for Slovenia, Ariel Haddad, resides in Trieste and is a member of the Lubavitcher Hassidic school. The current president of the Jewish Community of Slovenia is Andrej Kožar Beck. Since the year 2000, there has been a noticeable revival of Jewish culture in Slovenia. In 2003, a synagogue was opened in Ljubljana. In 2008, the Association Isserlein was founded to promote the legacy of Jewish culture in Slovenia. It has organized several public events that have received positive responses from the media, such as the public lighting of the hanukiah in Ljubljana in 2009. There has also been a growing public interest in the historical legacy of Jews of Slovenia. In 2008, the complex of the Jewish Cemetery in Rožna Dolina near Nova Gorica was restored due to the efforts of local Social Democratic Party politicians, pressure from the neighboring Jewish Community of Gorizia, and the American Embassy in Slovenia. In January 2010, the first monument to the victims of the Shoah in Slovenia was unveiled in Murska Sobota. Occasional antisemitic incidents still occur, such as Holocaust denial and antisemitic pronouncements by Slovene right-wingers.
In April 2024, a World Jewish Congress delegation gathered in Slovenia in response to the Jewish community's call for a governmental response to rising antisemitism. Following the delegation, WJC Executive Vice President Maram Stern issued an open letter to the Slovenian Minister of Foreign and European Affairs Tanja Fajon. The letter stated that "Invariably, tendentious attacks on Israel fan the flames of antisemitism… Ultimately, the people of Slovenia and the government will be most affected by the hatred that has been metastasizing throughout the country, and only you and your colleagues can administer the cure." The only functioning synagogue in Slovenia has been in the Jewish Cultural Center at Križevniška 3 in Ljubljana since 2016, where the sefer torah of the Slovene Jewish community is located. Services are held occasionally on Sabbaths and on major Jewish holidays. In 2021, a new synagogue was opened in Ljubljana, the first not managed by the municipality but directly by the Jewish community. Notable Jews from Slovenia See also Notes and references External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sentimental_comedy] | [TOKENS: 1785] |
Contents Sentimental comedy Sentimental comedy is an 18th-century dramatic genre which sprang up as a reaction to the immoral tone of English Restoration plays. In sentimental comedies, middle-class protagonists triumphantly overcome a series of moral trials. These plays aimed to produce tears rather than laughter and reflected contemporary philosophical conceptions of humans as inherently good but capable of being led astray by bad example. By appealing to his noble sentiments, a man could be reformed and set back on the path of virtue. Although the plays contained characters whose natures seemed overly virtuous and whose problems were too easily resolved, they were accepted by audiences as truthful representations of the human predicament. Elements of the genre The characters in sentimental comedy are either strictly good or bad. Heroes have no faults or bad habits; villains are thoroughly evil or morally degraded. The authors' purpose was to show the audience the innate goodness of people and that through morality people who have been led astray can find the path of righteousness. The plot usually centered on the domestic trials of middle-class couples and included romantic love scenes. Their private woes are exhibited with much emotional stress intended to arouse the spectator's pity and suspense in advance of the approaching happy ending. Lovers are often shown separated from each other by socioeconomic factors at the beginning, but brought together in the end by a discovery about the identity of the lower-class lover. Plots also contained an element of mystery to be solved. Verse was not used, in order to create a closer illusion of reality. It was thought that rhyme would obscure the true meaning of the words and make the truth disappear. The playwrights of this genre aimed to bring the audience to tears, not laughter, as the name sentimental comedy might suggest. They believed that noisy laughter inhibited the silent sympathy and thought of the audience. Playwrights strove to touch the feelings of the spectators so that they could learn from the play and relate the events they witnessed on stage to their own lives, causing them to live more virtuously. Major works The best known work of this genre is Sir Richard Steele's The Conscious Lovers (1722), in which the penniless heroine Indiana faces various tests until the discovery that she is an heiress leads to the necessary happy ending. Steele wished his plays to bring the audience "a pleasure too exquisite for laughter." Steele was an Irish writer and politician, remembered mainly for co-founding the magazine The Spectator. While he wrote a few notable sentimental comedies, he was criticized for being a hypocrite, as he wrote moral plays, booklets, and articles but enjoyed drinking, occasional dueling, and debauchery around town. Scholars debate whether a more important writer of the genre was Colley Cibber, an actor-manager, writer, and poet laureate who wrote the first sentimental comedy, Love's Last Shift, in order to give himself a role. The play did establish him as both an actor and a playwright, and though some of his 25 plays were praised, his political adaptations of well-known works met with much criticism. Neither Steele nor Cibber, nor any other writer, made a career of writing sentimental comedies, as the genre was popular for only a short time. In fact, all of the authors of sentimental comedy at this time wrote other forms, including Restoration comedy and tragedy.
Sentimental comedies continued to coexist with more conventional laughing comedies such as Oliver Goldsmith's She Stoops to Conquer (1773) and Richard Brinsley Sheridan's The Rivals (1775) until the sentimental genre waned in the early 19th century. Significant environmental factors Sentimental comedy was a reaction to the bawdy Restoration comedy of the 17th and 18th centuries. Many believed that the sexually explicit behavior encouraged by Charles II on the stage led to the demoralization of the English population outside the theater. Many felt that Restoration comedies, which started out ridiculing vice, appeared instead to support it, thereby becoming one of the leading causes of moral corruption. One of the leading environmental factors that made way for this new genre was Jeremy Collier's Short View of the Immorality and Profaneness of the English Stage, published in 1698. This essay signaled the public opposition to the supposed improprieties of plays staged during the previous three decades. Collier convincingly argued that the "business of plays is to recommend Vertue, and discountenance Vice". Other sentimentalists took on the responsibility to moralize the stage in hopes of repairing the perceived damage of Restoration comedies. These playwrights and theoreticians used the theater to instruct rather than delight after Puritan opposition to the theater grew from 1660 to 1698. At the opening night of Cibber's Love's Last Shift at the Drury Lane Theatre in January 1696, spectators experienced a new genre. They were genuinely surprised by the unexpected reconciliation, and the joy of seeing it "spread such an uncommon rapture of pleasure in the audience that never were spectators more happy in easing their minds by uncommon and repeated plaudits and honest tears." This enthusiasm was aroused by the virtues of the characters, creating a sense of astonishment in the audience because it allowed them to feel admiration for people like themselves. This feeling became the hallmark of sentimentalism. Richard Steele stated that sentimental comedy "makes us approve ourselves more", and Denis Diderot advocated that sentimentalism helps spectators remember that all nature is inherently good. Sentimentalists met resistance from playwrights of true comedy, who also had a moral aim but strove to reach it by exhibiting characters from whom the audience should take warning rather than examples to emulate. Sentimental comedy influenced and became absorbed into a new genre called domestic tragedy beginning around the mid-18th century. These tragedies intended to use real-life situations, settings, and prose to move an audience and foreshadowed the realism to come in the 19th century. Critical response Pierre Beaumarchais was very much in support of sentimental comedy and describes his reasoning in his essay published in 1767. He explains first that the purpose of sentimental comedy is to offer a more immediate interest and more direct moral lesson than tragedy, and a deeper meaning than comedy. Since, according to Beaumarchais, noisy laughter is the enemy of thought, sentimental comedy gives its audience a chance to find silent sympathy and thought-provoking isolation in tears. Being touched by the action on stage allows viewers to learn from the play, and as good men are reminded of the rewards of virtue they are able to relate the play's events to real life. The form is praised for doing away with verse and rhyme, as they can obscure the meaning, making the truth disappear.
Beaumarchais is instead in favor of the language found in nature, which is used in sentimental comedy. To combat the opposition, Beaumarchais lays out some criticism of laughing comedy. He argues that laughing at others distances the laugher from those being made fun of and that mockery is therefore not the best weapon to fight vice. A play that encourages this type of behavior also interests the audience more in the rascal than in the honest man, showing the viewers that morality is shallow, worthless, and inverted. Even Beaumarchais admits that some critics describe the genre as deadly dawdling prose with no comic relief, maxims, or characters, with improbable plots that will inspire laziness in young writers who will not take the time to write verse. In his 1773 essay, alternately titled A Comparison between Laughing and Sentimental Comedy, Oliver Goldsmith invokes the classical definition of comedy through Aristotle and Terence and insists that comedy is meant to expose the vices rather than the distresses of man. He argues that theatre is meant to amuse its spectators and that, while sentimental comedy might amuse the public, laughing comedy would amuse them more. He goes further to say that the characters of sentimental comedy are difficult to relate to and that audience members will, therefore, remain indifferent to the characters' plight. Goldsmith argues that since sentimental comedies show distresses, they should be labeled tragedies, though a simple name change will not enhance their efficacy. The essay ends with a sarcastic comment about the ease with which any writer could create a sentimental comedy with just some "insipid dialogue, without character or humour...make a pathetic scene or two, with a sprinkling of tender melancholy conversation...and there is no doubt that all the ladies will cry". Sentimental comedy had both supporters and naysayers, but by the 1770s the genre had all but died out, leaving in its place laughing comedies, such as Oliver Goldsmith's She Stoops to Conquer, which were generally concerned with the intrigues of those living in upper-class society. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Committee_of_safety_(American_Revolution)] | [TOKENS: 1671] |
Contents Committee of safety (American Revolution) In the American Revolution, committees of correspondence, committees of inspection (also known as committees of observation), and committees of safety were different local committees of Patriots that became a shadow government; they took control of the Thirteen Colonies away from royal officials, who became increasingly helpless. In the Province of Massachusetts Bay, as affairs drew toward a crisis, it became usual for towns to appoint three committees: of correspondence, of inspection, and of safety. The first was to keep the community informed of dangers either legislative or executive, and to concert measures for the public good; the second to watch for violations of non-importation agreements[clarify], or attempts of loyalists to evade them; the third to act as general executive while the legal authority was in abeyance. In February 1776 these were regularly legalized by the Massachusetts General Court but consolidated into one called the "Committee of Correspondence, Inspection, and Safety" to be elected annually by the towns. Committees of correspondence Committees of correspondence were public functionaries, and first existed in England, created by the parliamentary party of the 17th century in its struggles with the Stuarts. In 1763, when the English government attempted to enforce the trade and navigation acts on the American colonies after the Peace of Paris, the colonial leaders advised the merchants to hold meetings and appoint committees to memorialize the legislature and correspond with each other to forward a union of interests. This was done in Massachusetts, Rhode Island, and New York in 1763–64. On November 21, 1772, a town meeting at Faneuil Hall in Boston appointed a correspondence committee of 21 to communicate with other Massachusetts towns concerning infringements of popular rights. Until late in 1774, the committee remained the real executive body of Boston and largely of the province. The Boston committee, by legal town meeting, was made the executive of Boston. Under its direction the tea was thrown into the harbor, and the Tea Act of 1773 roused the remaining colonies: Georgia in September, Maryland and Delaware in October, North Carolina in December, and New York and New Jersey in February chose legislative committees of correspondence. New municipalities later joined the movement, including several in New Hampshire, Rhode Island, and New York City. After the Boston Port Bill came into effect the Boston committee invited those of eight other towns to meet in Faneuil Hall, and the meeting sent circulars to the other colonies recommending suspension of trade with Great Britain, while the legislative committee was directed by the House to send copies of the Port Bill to other colonies, and call attention to it as an attempt to suppress American liberty. The organization of the committees was at once enormously extended; almost every town, city, or county had one. In the middle and southern colonies the committees were empowered, by the terms of their appointment, to elect deputies to meet with those of other committees, to consult on measures for the public good. Committees of inspection Resolution 11, passed by the First Continental Congress in Philadelphia, established Committees of Inspection in every county, city, and town to enforce the Continental Association.
Hundreds of committees of inspection were formed following the First Continental Congress's declaration of the Continental Association, a boycott of British goods, in October 1774. In New York City, it was called the Committee of Observation or Committee of Sixty. The focus of the committees was initially on enforcing the Non-importation Agreements, which aimed to hinder the import of British manufactured goods. However, as the revolutionary crisis continued, the committees rapidly took on greater powers, filling the vacuum left by the colonial governments; the committees began to collect taxes and recruit soldiers. Kathleen Burk writes, "It is significant that the Committees believed that they derived their authority from the Continental Congress, not from the provincial assemblies or congresses." Committees of safety Committees of safety were a later outcome of the committees of correspondence. They were executive bodies that were created by, derived their authority from, and governed during adjournments of provincial assemblies or congresses, such as the New York Provincial Congress. The committees of safety were emergency panels of leading citizens, who passed laws, handed down regulations, enacted statutes, and did other fundamental business prior to the Declaration of Independence in July 1776 and the passage of individual state constitutions. As they assumed power to govern, however, they generally chose to observe rough legal procedures, warning and shaming enemies rather than killing them. Two examples of these rough legal proceedings were forced public confessions and apologies for slander or, more violently, the roughing up of an individual for voting against giving the poorer Bostonians supplies. Many of the men who had served on their individual states' committees of safety were later delegates to the Continental Congress. Importance T. H. Breen writes that "proliferation of local committees represented a development of paramount importance in the achievement of independence," because the committees were the first step in the creation of "a formal structure capable not only of policing the revolution on the ground but also of solidifying ties with other communities." The network of committees was also vital for reinforcing "a shared sense of purpose," speaking to "an imagined collectivity—a country of the mind" of Americans. For ordinary people, they were community forums where personal loyalties were revealed, tested, and occasionally punished. ... Serving on committees of safety ... was certainly not an activity for the faint of heart. The members of these groups exposed ideological dissenters, usually people well-known in the communities in which they lived. Although the committees attempted as best they could to avoid physical violence, they administered revolutionary justice as they alone defined it. They worked out their own investigative procedures, interrogated people suspected of undermining the American cause, and meted out punishments they deemed appropriate to the crimes. By mid-1775 the committees increasingly busied themselves with identifying, denouncing, and shunning political offenders. By demanding that enemies receive "civil excommunication" – the chilling words of a North Carolina committee – these groups silenced critics without sparking the kind of bloodbath that has characterized so many other insurgencies throughout the world. The strengthening of the committees of correspondence in the 1770s also marked the creation of what Gordon S.
Wood terms "a new kind of popular politics in America." Wood writes that "the rhetoric of liberty now brought to the surface long-latent political tendencies. Ordinary people were no longer willing to trust only wealthy and learned gentlemen to represent them ... various artisan, religious, and ethnic groups now felt that their particular interests were so distinct that only people of their kind could speak for them. In 1774 radicals in Philadelphia demanded that seven artisans and six Germans be added to the revolutionary committee of the city." The development of coalition and interest-group politics greatly alarmed both royal officials and more conservative Patriots. For example, William Henry Drayton, the prominent South Carolina planter who had studied at Oxford University, complained about the participation of cobblers and butchers, stating that "Nature never intended that such men should be profound politicians, or able statesmen. In 1775, the royal governor of Georgia "noted in astonishment that the committee in control of Savannah consisted of 'a Parcel of the Lowest People, chiefly carpenters, shoemakers, Blacksmiths etc with a Jew at their head."[Footnote 1] Very few records of committees of safety survive. Committee activities are attested to primarily through newspapers and published material. By 1775, the committees had become counter-governments that gradually replaced royal authority and took control of local governments. They regulated the economy, politics, morality, and militia of their individual communities. In North Carolina in December 1776, they came under the control of a more powerful central authority, the Council of Safety. Eighteen years later, at the height of the French Revolution, France was ruled by its own Committee of Public Safety. The French revolutionaries were familiar with the American struggle — for them, the most recent and significant precedent of a Republican revolution. References Footnote Bibliography This article incorporates text from a publication now in the public domain: Rines, George Edwin, ed. (1904). "Committees of Correspondence". Encyclopedia Americana. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wilhelm_Beer] | [TOKENS: 317] |
Contents Wilhelm Beer Wilhelm Wolff Beer (4 January 1797 – 27 March 1850) was a banker and astronomer from Berlin, Prussia, and the brother of Giacomo Meyerbeer. Astronomy Beer's fame derives from his hobby, astronomy. He built a private observatory with a 9.5 cm refractor in Tiergarten, Berlin. Together with Johann Heinrich Mädler he produced the first exact map of the Moon (entitled Mappa Selenographica) in 1834–1836, and in 1837 published a description of the Moon (Der Mond nach seinen kosmischen und individuellen Verhältnissen). Both remained the best descriptions of the Moon for many decades. In 1830, Beer and Mädler created the first globe of the planet Mars. In 1840 they made a map of Mars and calculated its rotation period to be 24 h 37 min 22.7 s, only 0.1 seconds different from the actual period as it is known today. Other work In addition to his hobby of astronomy, he helped with the establishment of a railway system in Prussia and promoted the Jewish community in Berlin. In the last decade of his life, he worked as a writer and politician. In 1849 he was elected to the first chamber of the Prussian parliament. Named after Beer The crater Beer on Mars is named in Wilhelm Beer's honor and lies near the crater Mädler. There is also a crater called Beer on the Moon. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3_Barab%C3%A1si] | [TOKENS: 2715] |
Contents Albert-László Barabási Albert-László Barabási (born March 30, 1967) is a Romanian-born Hungarian-American physicist, renowned for his pioneering discoveries in network science and network medicine. He is a distinguished university professor and Robert Gray Professor of Network Science at Northeastern University, holding additional appointments at the Department of Medicine, Harvard Medical School and the Department of Network and Data Science at Central European University. Barabási previously served as the Emil T. Hofman Professor of Physics at the University of Notre Dame and was an associate member of the Center of Cancer Systems Biology at the Dana–Farber Cancer Institute, Harvard University. In 1999 Barabási introduced the concept of scale-free networks and proposed the Barabási–Albert model, which explains the widespread emergence of such networks in natural, technological and social systems, including the World Wide Web and online communities. Barabási is the founding president of the Network Science Society, which sponsors the flagship NetSci Conference established in 2006. Birth and education Barabási was born on March 30, 1967, to an ethnic Hungarian family in Cârța, Harghita County, Romania. His father, László Barabási, was a historian, museum director and writer, while his mother, Katalin Keresztes, taught literature and later became director of a children's theater. He attended a high school specializing in science and mathematics, where he won a local physics olympiad in the 9th and 12th grades. Between 1986 and 1989, he studied physics and engineering at the University of Bucharest, during which time he began researching chaos theory and published three papers. In 1989, Barabási emigrated to Hungary, together with his father. He received a master's degree in 1991 at Eötvös Loránd University in Budapest, under the supervision of Tamás Vicsek. Barabási then enrolled in the Physics program at Boston University, where he earned his PhD in 1994. His doctoral thesis, conducted under the direction of H. Eugene Stanley, was published by Cambridge University Press under the title Fractal Concepts in Surface Growth. Academic career After a one-year postdoc at the IBM Thomas J. Watson Research Center, Barabási joined the faculty at the University of Notre Dame in 1995. In 2000, at the age of 32, he was named the Emil T. Hofman Professor of Physics, becoming the youngest endowed professor. In 2004 he founded the Center for Complex Network Research. In 2005–06 he was a visiting professor at Harvard University. In fall 2007, Barabási left Notre Dame to become a Distinguished University Professor and director of the Center for Network Science at Northeastern University. Concurrently, he took up an appointment in the Department of Medicine at Harvard Medical School. As of 2008, Barabási holds Hungarian, Romanian and U.S. citizenship. Research and achievements Barabási's contributions to network science and network medicine have fundamentally changed the study of complex systems. Barabási's work challenged the prevailing notion that complex networks could be adequately modeled as random networks. He is particularly renowned for his 1999 discovery of scale-free networks. In 1999 he created a map of the World Wide Web and found that its degree distribution does not follow the Poisson distribution expected for random networks, but is instead best approximated by a power law.
Collaborating with his student, Réka Albert, he introduced the Barabási–Albert model, which proposed that growth and preferential attachment are jointly responsible for the emergence of the scale-free property in real-world networks. The following year, Barabási demonstrated that the power-law degree distribution is not limited to the World Wide Web but also appears in metabolic networks and protein–protein interaction networks, demonstrating the universality of the scale-free property. In 2009 Science celebrated the ten-year anniversary of Barabási's groundbreaking discovery by dedicating a special issue to Complex Systems and Networks, recognizing his paper as one of the most cited in the journal's history.[citation needed] In a 2001 paper with Réka Albert and Hawoong Jeong, Barabási demonstrated that networks exhibit robustness to random failures but are highly vulnerable to targeted attacks, a characteristic known as the Achilles' heel property. Specifically, networks can easily withstand the random failure of a large number of nodes, highlighting their significant robustness. However, they are prone to rapid collapse when the most connected hubs are deliberately removed. The breakdown threshold of a network was analytically linked to the second moment of the degree distribution; the threshold's convergence to zero for large networks explains why heterogeneous networks can survive the failure of a large fraction of their nodes. In 2016, Barabási extended these concepts to network resilience, demonstrating that the network structure determines a system's capacity for resilience. While robustness refers to the system's ability to carry out its basic functions despite the loss of some nodes and links, resilience involves the system's ability to adapt to internal and external disturbances by modifying its mode of operation without losing functionality. Therefore, resilience is a dynamical property that requires a fundamental shift in the system's core activities. Barabási is recognized as one of the founders of network medicine, a term he introduced in his 2007 article entitled "Network Medicine – From Obesity to the 'Diseasome'", published in The New England Journal of Medicine. His work established the concept of the diseasome, or disease network, which illustrates how diseases are interconnected through shared genetic factors, highlighting their common genetic roots. He subsequently pioneered the use of large patient datasets, linking the roots of disease comorbidity to molecular networks. A key concept of network medicine is Barabási's discovery that genes associated with the same disease are located in the same network neighborhood, which led to the concept of the disease module, currently employed to facilitate drug discovery, drug design, and the development of biomarkers. He elaborated on these concepts in a 2012 TEDMED talk, emphasizing their significance in medical research and treatment strategies. His contributions have been instrumental in establishing the Channing Division of Network Medicine at Harvard Medical School and the Network Medicine Institute, representing 33 universities and institutions around the world committed to advancing the field. Barabási's work in network medicine has led to multiple experimentally falsifiable predictions, helping identify experimentally validated novel pathways in asthma, predict new mechanisms of action for compounds such as rosmarinic acid, and repurpose existing drugs for new therapeutic functions (drug repurposing).
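The growth-and-preferential-attachment mechanism of the Barabási–Albert model described above lends itself to a short simulation. The sketch below is not taken from Barabási's publications; the node count, the attachment parameter m, and the crude printout of the degree histogram are illustrative assumptions, but the core rule (each new node links to existing nodes with probability proportional to their current degree) is the mechanism the model describes.

import random
from collections import Counter

def barabasi_albert(n=10000, m=3, seed=42):
    # Grow a graph of n nodes; each new node attaches to m distinct earlier nodes.
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Listing every endpoint once per edge makes uniform sampling from this list
    # equivalent to picking a node with probability proportional to its degree.
    pool = [v for e in edges for v in e]
    for new_node in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))
        for t in targets:
            edges.append((new_node, t))
            pool.extend((new_node, t))
    return edges

edges = barabasi_albert()
degree = Counter(v for e in edges for v in e)
histogram = Counter(degree.values())
# On log-log axes the histogram falls on a roughly straight line (slope near -3),
# the power-law signature of a scale-free network; here we only print the head.
for k in sorted(histogram)[:10]:
    print(k, histogram[k])

With a larger n, the tail of the distribution extends to hubs with hundreds of links, which is also what makes the deliberate removal of hubs so damaging in the robustness results discussed above.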
The practical applications of network medicine have made significant impacts in clinical settings. For example, his research aids physicians in determining whether rheumatoid arthritis patients will respond to anti-TNF therapy. During the COVID-19 pandemic, Barabási led a major collaboration involving researchers from Harvard University, Boston University and the Broad Institute to predict and experimentally test the efficacy of 6,000 approved drugs for COVID-19 patients. Barabási's work on nutritional dark matter and food composition, in collaboration with Giulia Menichetti, has fundamentally reshaped our understanding of diet as a complex system and its implications for health. In his 2019 study, he revealed that conventional nutritional databases track only a minuscule fraction of the over 26,000 biochemicals present in food, coining the term "nutritional dark matter", work that inspired the Periodic Table of Food Initiative by the Rockefeller Foundation and the American Heart Association. In 2021, he extended network medicine approaches to elucidate the health implications of polyphenols, demonstrating how intricate molecular networks connect dietary compounds to health outcomes. His research on food processing led to the development of the first AI tool to predict the degree of food processing for any food, showing that over 73% of the US food supply is ultra-processed and correlating processing levels with adverse health markers. Barabási's efforts culminated in the 2025 release of GroceryDB and the TrueFood database, which is used by millions on a daily basis, as it reveals the processing levels of foods in US supermarkets. In 2005, Barabási discovered the fat-tailed nature of the interevent times in human activity patterns. The pattern indicated that human activity is bursty: short periods of intensive activity are followed by long periods that lack detectable activity. Bursty patterns have been subsequently discovered in a wide range of processes, from web browsing to email communications and gene expression patterns. He proposed the Barabási model of human dynamics to explain the phenomenon, demonstrating that a queuing model can account for the bursty nature of human activity, a topic covered in his book Bursts: The Hidden Pattern Behind Everything We Do. Barabási laid foundational work in understanding individual human mobility patterns through a series of influential papers. In his 2008 Nature publication, Barabási utilized anonymized mobile phone data to analyze human mobility, discovering that human movement exhibits a high degree of regularity in time and space, with individuals showing consistent travel distances and a tendency to return to frequently visited locations. In a subsequent 2010 Science paper, he explored the predictability of human dynamics by analyzing mobile phone user trajectories. Contrary to expectations, he found a 93% predictability in human movements across all users. He introduced two principles governing human trajectories, leading to the development of a widely used model for individual mobility. Using this modeling framework, a decade before the COVID-19 pandemic, Barabási predicted the spreading patterns of a virus transmitted through direct contact. Barabási has made significant contributions to the understanding of network controllability and observability, addressing the fundamental question of how large networks regulate and manage their own behavior.
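The queuing explanation of bursty activity mentioned above can be sketched in a similarly compact way. The toy version below is only loosely modeled on the priority-queue idea; the list length, execution probability, and step count are assumptions chosen for illustration rather than values from Barabási's work. It keeps a short list of pending tasks and almost always executes the highest-priority one first, and the recorded waiting times then develop a heavy tail: most tasks are handled almost immediately, while a few linger for very long stretches, which is the bursty signature described in the article.

import random
from collections import Counter

def priority_queue_model(steps=100000, list_len=2, p_highest=0.99, seed=1):
    rng = random.Random(seed)
    # Each pending task is a (priority, arrival_time) pair with a random priority.
    tasks = [(rng.random(), 0) for _ in range(list_len)]
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p_highest:
            idx = max(range(list_len), key=lambda i: tasks[i][0])  # execute the most urgent task
        else:
            idx = rng.randrange(list_len)                          # occasionally pick one at random
        waits.append(t - tasks[idx][1])   # time the executed task spent waiting
        tasks[idx] = (rng.random(), t)    # a new task arrives in its place
    return waits

waits = priority_queue_model()
histogram = Counter(waits)
# Most waiting times are a single step, but a few tasks wait for thousands of steps,
# a fat-tailed distribution unlike the short exponential tail of a random schedule.
for w in sorted(histogram)[:10]:
    print(w, histogram[w])

Lowering p_highest toward zero recovers a random schedule and the heavy tail disappears, which is the contrast the queuing argument relies on.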
He was the first to apply the tools of control theory to network science, bridging disciplines that had traditionally been studied separately. He proposed a method to identify the nodes through which one can control a complex network by mapping the control problem, widely studied in physics and engineering since Maxwell, onto graph matching, merging statistical mechanics and control theory. Barabási utilized network control principles to predict the functions of individual neurons within the Caenorhabditis elegans connectome. This application provided direct experimental confirmation of network control theory by successfully identifying new neurons involved in the organism's locomotion, predictions that were subsequently confirmed experimentally. His work demonstrated the practical utility of network control methods in biological systems, highlighting their potential for uncovering previously unknown functional components within complex networks. Awards Barabási was the recipient of the 2024 Gothenburg Lise Meitner Award; he has also been the recipient of the 2023 Julius Edgar Lilienfeld Prize, the top prize of the American Physical Society, "for pioneering work on the statistical physics of networks that transformed the study of complex systems, and for lasting contributions in communicating the significance of this rapidly developing field to a broad range of audiences." In 2021 he received the EPS Statistical and Nonlinear Physics Prize, awarded by the European Physical Society for "his pioneering contributions to the development of complex network science, in particular for his seminal work on scale-free networks, the preferential attachment model, error and attack tolerance in complex networks, controllability of complex networks, the physics of social ties, communities, and human mobility patterns, genetic, metabolic, and biochemical networks, as well as applications in network biology and network medicine." Barabási has been elected to the US National Academy of Sciences, the Austrian Academy of Sciences (2024), the Hungarian Academy of Sciences (2004), Academia Europaea (2007), the European Academy of Sciences and Art (2018), the Romanian Academy of Sciences (2018) and the Massachusetts Academy of Sciences (2013). He was elected Fellow of the American Physical Society (2003), of the American Association for the Advancement of Science (2011), and of the Network Science Society (2021). He was awarded a Doctor Honoris Causa by Obuda University (2023) in Hungary, the Technical University of Madrid (2011), Utrecht University (2018) and the West University of Timișoara (2020). He received the Bolyai Prize from the Hungarian Academy of Sciences (2019), the Senior Scientific Award of the Complex Systems Society (2017) for "setting the basis of what is now modern Network Science", the Lagrange Prize (2011), the C&C Prize (2008, Japan) "for stimulating innovative research on networks and discovering that the scale-free property is a common feature of various real-world complex networks", the Cozzarelli Prize of the National Academy of Sciences (USA), the John von Neumann Medal (2006), awarded by the John von Neumann Computer Society of Hungary for outstanding achievements in computer-related science and technology, and the FEBS Anniversary Prize for Systems Biology (2005). In 2021, Barabási was ranked 2nd in a list of the world's best engineering and technology scientists, based on h-index. Selected publications References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Andrew_McCollum] | [TOKENS: 263] |
Contents Andrew McCollum Andrew McCollum (born September 4, 1983) is an American angel investor and businessman. He is a co-founder of Facebook and the current chief executive officer of Philo. Education McCollum attended Harvard University with co-founder Mark Zuckerberg and others on the founding team. He worked at Facebook from February 2004 until September 2007. Initially, he worked on Wirehog, a file-sharing program, together with Adam D'Angelo. McCollum returned to Harvard College and graduated in 2007 with a bachelor's degree in computer science. He later earned a master's degree in education from the Harvard Graduate School of Education. McCollum was a member of the Harvard team that competed in the 31st Association for Computing Machinery International Collegiate Programming Contest in Tokyo, having placed second in the regional competitions behind the Massachusetts Institute of Technology. Career McCollum was a co-founder of JobSpice, an online resume preparation tool. He currently acts as Entrepreneur in Residence at New Enterprise Associates and Flybridge Capital Partners. In November 2014, McCollum was announced as the new Philo CEO, succeeding Christopher Thorpe. Personal life McCollum married Gretchen Sisson, a sociology postdoc, in June 2012. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Spain] | [TOKENS: 19918] |
Contents History of the Jews in Spain The history of the Jews in the current-day Spanish territory stretches back to Biblical times according to Jewish tradition, but the settlement of organised Jewish communities in the Iberian Peninsula possibly traces back to the times after the destruction of the Second Temple in 70 CE. The earliest archaeological evidence of Hebrew presence in Iberia consists of a 2nd-century gravestone found in Mérida. From the late 6th century onward, following the Visigothic monarchs' conversion from Arianism to the Nicene Creed, conditions for Jews in Iberia considerably worsened. After the Umayyad conquest of Hispania in the early 8th century, Jews lived under the Dhimmi system and progressively Arabised. Jews of Al-Andalus stood out particularly during the 10th and the 11th centuries, in the caliphal and first taifa periods. Scientific and philological study of the Hebrew Bible began, and secular poetry was written in Hebrew for the first time. After the Almoravid and Almohad invasions, many Jews fled to Northern Africa and the Christian Iberian kingdoms. Targets of antisemitic mob violence, Jews living in the Christian kingdoms faced persecution throughout the 14th century, leading to the 1391 pogroms. As a result of the Alhambra Decree of 1492, the remaining practising Jews in Castile and Aragon were forced to convert to Catholicism (thus becoming 'New Christians' who faced discrimination under the limpieza de sangre system) whereas those who continued to practise Judaism (c. 100,000–200,000) were expelled, creating diaspora communities. Tracing back to a 1924 decree, there have been initiatives to favour the return of Sephardi Jews to Spain by facilitating Spanish citizenship on the basis of demonstrated ancestry. An estimated 40,000 to 50,000 Jews live in Spain today. Early history The earliest significant Jewish presence in the Iberian Peninsula is generally traced back to the first centuries CE, when the region, known to the Romans as Hispania, was part of the Roman Empire. This presence is supported by both archaeological finds and literary sources. Among the early artifacts of likely Jewish origin discovered in Spain is an amphora from the first century CE, discovered in Ibiza, part of the Balearic Islands. The vessel bears two Hebrew characters, suggesting Jewish contact with the region, likely through trade between Judaea and the Balearics. Additionally, a signet ring from Cádiz, dating to the 8th–7th century BCE, features an inscription generally considered Phoenician, though some scholars interpret it as "paleo-Hebraic," possibly indicating a Jewish presence in biblical times. Two trilingual Jewish inscriptions from Tarragona and Tortosa, dated between the 2nd century BCE and the 6th century CE, further support evidence of early Jewish settlements. A tombstone from Adra (formerly Abdera), inscribed with the name of a Jewish infant, Annia Salomonula, dates to the 3rd century CE. One of the earliest references possibly indicating a Jewish presence in Roman-era Spain is Paul the Apostle's Epistle to the Romans. Paul's stated intention to travel to Spain to preach the gospel has been interpreted by many as evidence of established Jewish communities in the region during the mid-first century CE. Flavius Josephus, in The Jewish War, records that Herod Antipas, son of Herod the Great and tetrarch of Galilee and Perea, was exiled by Emperor Caligula to Spain in 39 CE. 
However, in his later work, Antiquities of the Jews, Josephus identifies the location of Antipas's banishment as Gaul. Rabbinic literature references Spain as a distant land with a Jewish presence. The Mishnah, redacted around 200 CE, implied that there was a Jewish community in Spain, and that there was communication with the Jewish community in the Land of Israel. A tradition passed down by Rabbi Berekhiah and Rabbi Shimon bar Yochai, quoting second-century tanna Rabbi Meir, states: "Do not fear, O Israel, for I help you from remote lands, and your seed from the land of their captivity, from Gaul, from Spain, and from their neighbors." From a slightly later period, both Midrash Rabbah (Leviticus Rabba § 29.2) and Pesikta de-Rav Kahana (Rosh Hashanah) make mention of the Jewish diaspora in Spain (Hispania) and its eventual return. Among these early references are several decrees of the Council of Elvira, convened in the early fourth century, which address proper Christian behaviour with regard to the Jews of Spain, notably forbidding marriage between Jews and Christians. Thus, while there are limited material and literary indications for Jewish contact with Spain from a very early period, more definitive and substantial data begins with the third century. Data from this period suggest a well-established community, whose foundations must have been laid sometime earlier. Some suggest that substantial Jewish immigration probably occurred during the Roman period of Hispania. The province came under Roman control with the defeat of Carthage in the Second Punic War (218–202 BC). It is likely that these communities originated several generations earlier in the aftermath of the conquest of Judea, and possible that they originated much earlier. It is within the realm of possibility that they went there under the Romans as free men to take advantage of its rich resources and build enterprises there. These early arrivals would have been joined by those who had been enslaved by the Romans under Vespasian and Titus, and dispersed to the extreme west during the period of the Jewish-Roman War, and especially after the defeat of Judea in 70. One questionable estimate places the number carried off to Spain at 80,000. Subsequent immigrations came into the area along both the northern African and southern European sides of the Mediterranean. As citizens of the Roman Empire, the Jews of Spain engaged in a variety of occupations, including agriculture. Until the adoption of Christianity, Jews had close relations with non-Jewish populations, and played an active role in the social and economic life of the province. The edicts of the Synod of Elvira, although early examples of priesthood-inspired antisemitism, provide evidence of Jews who were integrated enough into the greater community to cause alarm among some: of the council's 80 canonic decisions, all those that pertained to Jews served to maintain a separation between the two communities. It seems that by this time the presence of Jews was of greater concern to Catholic authorities than the presence of pagans; Canon 16, which prohibited marriage with Jews, was worded more strongly than Canon 15, which prohibited marriage with pagans. Canon 78 threatened those who committed adultery with Jews with ostracism.
Canon 48 forbade Jews from blessing Christian crops, and Canon 50 forbade sharing meals with Jews. Although the spread of Jews into Europe is most commonly associated with the Diaspora that ensued from the Roman conquest of Judea, emigration from Judea into the greater Roman Mediterranean area antedated the destruction of Jerusalem at the hands of the Romans under Titus. Around 300 CE, the Synod of Elvira, an ecclesiastical council held in the Roman province of Hispania Baetica, addressed the interactions between Christians and Jews, which were relatively common at the time, with some Christians even admiring Jewish practices. To mitigate Jewish influence on Christian society, the council enacted several edicts aimed at reinforcing separation between the two groups. Canon 16 prohibited intermarriage between Christians and Jews, Canon 78 imposed penalties on Christians committing adultery with Jewish women, Canon 48 forbade Jews from blessing Christian crops, and Canon 50 prohibited shared meals between Christians and Jews. Severus of Minorca's Letter on the Conversion of the Jews, from the 5th century, recounts the alleged conversion of Menorca's Jewish population in 418. Following the arrival of Saint Stephen's relics in Magona, Severus launched a campaign against the local Jews. Fearing violence and inspired by the Maccabees, the Jews stockpiled weapons. Severus mobilized Christians, accused Jewish leaders of plotting, and inspected the synagogue's weapons. This led to a riot, with Christians seizing and burning the synagogue. Within a week, all 540 local Jews were converted by force. In comparison to Jewish life in Byzantium and Italy, life for the early Jews in Hispania and the rest of southern Europe was relatively tolerable. This is due in large measure to the difficulty the Church had in establishing itself on its western frontier. In the west, Germanic tribes such as the Suevi, the Vandals, and especially the Visigoths had more or less disrupted the political and ecclesiastical systems of the Roman Empire, and for several centuries the Jews enjoyed a degree of peace their brethren to the east did not.[citation needed] In Jewish tradition Medieval Jewish legends often traced the arrival of Jews in Spain to the First Temple period. One such legend from the 16th century claimed that a funeral inscription in Murviedro belonged to Adoniram, a commander of King Solomon, who had supposedly died in Spain while collecting tribute. Another legend spoke of a letter allegedly sent by the Jews of Toledo to Judaea in 30 CE, asking to prevent the crucifixion of Jesus.
These legends aimed to establish that Jews had settled in Spain well before the Roman period and to absolve them of any responsibility for the death of Jesus, a charge often leveled at them in later centuries. Several early Jewish writers wrote that their families had lived in Spain since the destruction of the first temple. Isaac Abravanel (1437–1508) stated that the Abravanel family had lived on the Iberian Peninsula for 2,000 years. The earliest mention of Sepharad is, allegedly, found in Obadiah 1:20: "And the exiles of this host of the sons of Israel who are among the Canaanites as far as Ṣarfat (Heb. צרפת), and the exiles of Jerusalem who are in Sepharad, will possess the cities of the south." While the medieval lexicographer, David ben Abraham al-Fasi, identifies Ṣarfat with the city of Ṣarfend (Ladino: צרפנדה), the word Sepharad (Hebrew: ספרד) in the same verse has been translated by the 1st century rabbinic scholar, Jonathan ben Uzziel, as Aspamia. Based on a later teaching in the compendium of Jewish oral laws compiled by Judah ha-Nasi in 189 CE, known as the Mishnah, Aspamia is associated with a very faraway place, generally thought of as Hispania, or Spain. Circa 960, Hisdai ibn Shaprut, minister of trade in the court of the caliph in Córdoba, wrote to Joseph, the king of Khazaria, saying: "The name of our land in which we dwell is called in the sacred tongue, Sepharad, but in the language of the Arabs, the indwellers of the lands, Alandalus [Andalusia], the name of the capital of the kingdom, Córdoba." Some legends associated the biblical placename Tarshish with Tartessus, a locale in southern Spain, suggesting that Jewish traders were active in Spain during the Phoenician and Carthaginian eras. In the Bible, Tarshish is mentioned in the books of Jeremiah, Ezekiel, I Kings, Jonah and Romans; in generally describing Tyre's empire from west to east, Tarshish is listed first (Ezekiel 27.12–14), and in Jonah 1.3 it is the place to which Jonah sought to flee from the LORD; evidently it represents the westernmost place to which one could sail. One might speculate that commerce conducted by Jewish emissaries, merchants, craftsmen, or other tradesmen among the Canaanitic-speaking Phoenicians from Tyre might have brought them to Tarshish. Although the notion of Tarshish as Spain is merely based on suggestive material, it leaves open the possibility of a very early Jewish presence in the Iberian peninsula. According to Rabbi David Kimchi (1160–1235), in his commentary on Obadiah 1:20, Ṣarfat and Sepharad refer to the Jews exiled during the war with Titus who went as far as the countries Alemania (Germany), Escalona, France and Spain. He explicitly identified Ṣarfat and Sepharad as France and Spain, respectively. Some scholars think that, in the case of the place-name Ṣarfat (lit. Ṣarfend), which, as noted, was applied to the Jewish Diaspora in France, the association with France was made only exegetically, because of its similarity in spelling with the name פרנצא (France) by a reversal of its letters. Spanish Jew Moses de León (ca. 1250 – 1305) mentions a tradition concerning the first Jewish exiles, saying that the vast majority of the first exiles driven away from the land of Israel during the Babylonian captivity refused to return, for they had seen that the Second Temple would be destroyed like the first.
In yet another teaching, passed down later by Moshe ben Machir in the 16th century, an explicit reference is made to the fact that Jews have lived in Spain since the destruction of the First Temple: Now, I have heard that this praise, emet weyaṣiv [which is now used by us in the prayer rite], was sent by the exiles who had been driven away from Jerusalem and who were not with Ezra in Babylon, and that Ezra had sent inquiring after them, but they did not wish to go up [there], replying that since they were destined to go off again into exile a second time, and that the Temple would once again be destroyed, why should we then double our anguish? It is best for us that we remain here in our place and serve God. Now, I have heard that they are the people of Ṭulayṭulah (Toledo) and those who are near to them. However, that they might not be thought of as wicked men and those who are lacking in fidelity, may God forbid, they wrote down for them this magnanimous praise, etc. Similarly, Gedaliah ibn Jechia the Spaniard has written: In [5,]252 anno mundi [1492 CE], the King Ferdinand and his wife, Isabella, made war with the Ishmaelites who were in Granada and took it, and while they returned they commanded the Jews in all of his kingdoms that in but a short time they were to take leave from the countries [they had heretofore possessed], they being Castile, Navarre, Catalonia, Aragón, Granada and Sicily. Then the [Jewish] inhabitants of Ṭulayṭulah (Toledo) answered that they were not present [in the land of Judea] at the time when their Christ was put to death. Apparently, it was written upon a large stone in the city's street which some very ancient sovereign inscribed and testified that the Jews of Ṭulayṭulah (Toledo) did not depart from there during the building of the Second Temple, and were not involved in putting to death [the man whom they called] Christ. Yet, no apology was of any avail to them, neither unto the rest of the Jews, till at length, six hundred thousand souls had evacuated from there. Don Isaac Abrabanel, a prominent Jewish figure in the 15th century and one of the king's trusted courtiers who witnessed the 1492 expulsion of Jews, informs his readers that the first Jews to reach Spain were brought there by ship by a certain Phiros, a confederate of the king of Babylon in laying siege to Jerusalem. This man was a Greek by birth, but had been given a kingdom in Spain. He became related by marriage to a certain Espan, the nephew of King Heracles, who also ruled over a kingdom in Spain. This Heracles later renounced his throne because of his preference for his native country in Greece, leaving his kingdom to his nephew, Espan, from whom the country's name España (Spain) derives. The Jewish exiles transported there by the said Phiros were descended by lineage from Judah, Benjamin, Shimon and Levi, and were, according to Abrabanel, settled in two districts in southern Spain: one, Andalusia, in the city of Lucena – a city so-called by the Jewish exiles that had come there; the second, in the country around Ṭulayṭulah (Toledo). Abrabanel says that the name Ṭulayṭulah was given to the city by its first Jewish inhabitants, and surmises that the name may have meant טלטול (= wandering), on account of their wandering from Jerusalem. He says, furthermore, that the original name of the city was Pirisvalle, so named by its early pagan[clarification needed] inhabitants.
According to Abrabanel, the Jewish exiles that arrived in Spain during the biblical period were later joined by those brought by Titus after the destruction of the Second Temple. Rabbi and scholar Abraham ibn Daud wrote in 1161: "A tradition exists with the [Jewish] community of Granada that they are from the inhabitants of Jerusalem, of the descendants of Judah and Benjamin, rather than from the villages, the towns in the outlying districts [of Israel]." Elsewhere, he writes about his maternal grandfather's family and how they came to Spain: "When Titus prevailed over Jerusalem, his officer who was appointed over Hispania appeased him, requesting that he send to him captives made-up of the nobles of Jerusalem, and so he sent a few of them to him, and there were amongst them those who made curtains and who were knowledgeable in the work of silk, and [one] whose name was Baruch, and they remained in Mérida." Here, Rabbi Abraham ben David refers to the second influx of Jews into Spain, shortly after the destruction of Israel's Second Temple in 70 CE. Don Isaac Abrabanel wrote that he found written in the ancient annals of Spanish history collected by the kings of Spain that the 50,000 Jewish households then residing in the cities throughout Spain were the descendants of men and women who were sent to Spain by the Roman Emperor and who had formerly been subjected to him, and whom Titus had originally exiled from places in or around Jerusalem. The two Jewish exiles, those sent to Spain after the destruction of the First Temple, and those sent there after the destruction of the Second Temple, joined together and became one community. Under the Visigoths (5th century to 711) Barbarian invasions brought most of the Iberian Peninsula under Visigothic rule by the early 5th century. Other than in their contempt for Catholics, who reminded them of the Romans, the Visigoths did not generally take much of an interest in the religious creeds within their kingdom. It was only in 506, when Alaric II (484–507) published his Breviarium Alaricianum in which he adopted the laws of the ousted Romans that a Visigothic king concerned himself with the Jews. The tides turned even more dramatically following the conversion of the Visigothic royal family under Recared from Arianism to Catholicism in 587. In their desire to consolidate the realm under the new religion, the Visigoths adopted an aggressive policy concerning the Jews. As the king and the church acted in a single interest, the situation for the Jews deteriorated. At the Toledo III Council in 589, bishops endorsed the Breviary's restrictions on Jews, including prohibitions on intermarriage with Christians, owning Christian slaves, and holding public office. While the policies of the subsequent Kings Liuva II (601–604), Witteric (603–610), and Gundemar (610–612) are unknown,[citation needed] Sisebut (612–620) embarked on Recared's course with renewed vigour. Sisebut instituted what was to become a recurring phenomenon in European Christian kingdoms, the first edicts requiring the mass conversion of all Jews to Christianity. After his 613 decree that Jews must either convert or be expelled, some fled to Gaul or North Africa, while as many as 90,000 converted. Many of the conversos, like those of later periods, maintained their Jewish identities in secret. During the more tolerant reign of Suintila (621–631), most of the conversos returned to Judaism, and a number of the exiles returned to Spain. 
In 633, the Fourth Council of Toledo, while taking a stance in opposition to compulsory baptism, convened to address the problem of crypto-Judaism. The canons referred to forcibly converted Jews as "baptized Jews" or simply as "Jews," but never as "Christians". It was decided that if a professed Christian was determined to be a practising Jew, their children were to be taken away to be raised in monasteries or trusted Christian households. The council further directed that all who had reverted to Judaism during the reign of Suintila had to return to Christianity. The trend toward intolerance continued with the ascent of Chintila (636–639). He directed the Sixth Council of Toledo to order that only Catholics could remain in the kingdom, and, taking an unusual step further, he excommunicated "in advance" any of his successors who did not act in accordance with his anti-Jewish edicts. Again, many converted, but others chose exile. However, the "problem" continued. The Eighth Council of Toledo in 653 again tackled the issue of Jews within the realm. Further measures at the time included the forbidding of all Jewish rites (including circumcision and the observation of the Shabbat), and all converted Jews had to promise to put to death, either by burning or by stoning, any of their brethren known to have relapsed to Judaism. The council was aware that prior efforts had been frustrated by lack of compliance among authorities on the local level; therefore, anyone, including nobles and clergy, found to have aided Jews in their practice of Judaism was to be punished by seizure of one quarter of their property and excommunication. The efforts again proved unsuccessful. The Jewish population remained sufficiently sizable to prompt Wamba (672–680) to issue limited expulsion orders against them, and the reign of Erwig (680–687) also seemed vexed by the issue. The Twelfth Council of Toledo again called for forced baptism and, for those who disobeyed, seizure of property, corporal punishment, exile, and slavery. Jewish children over seven years of age were taken from their parents and similarly dealt with in 694. Erwig also took measures to ensure that Catholic sympathisers would not be inclined to aid Jews in their efforts to subvert the council's rulings. Heavy fines awaited any nobles who acted in favour of the Jews, and members of the clergy who were remiss in enforcement were subject to a number of punishments. Egica (687–702), recognising the wrongness of forced baptism, relaxed the pressure on the conversos but kept it up on practising Jews. Economic hardships included increased taxes and the forced sale, at a fixed price, of all property ever acquired from Christians. That effectively ended all agricultural activity for the Jews of Spain. Furthermore, Jews were not to engage in commerce with the Christians of the kingdom or to conduct business with Christians overseas. Egica's measures were upheld by the Sixteenth Council of Toledo in 693. In 694, at the Seventeenth Council of Toledo, Jews were condemned to slavery by the Visigoths because of a plot to revolt against them encouraged by the Eastern Roman Empire and Romans still residing in Spain. After the Visigothic elites adopted the Nicene Creed, persecutions of Jews increased.
The degree of complicity that the Jews had in the Islamic invasion in 711 is uncertain, but since they were openly treated as enemies in the country in which they had resided for generations, it would be no surprise for them to have appealed to the Moors to the south, who were quite tolerant in comparison to the Visigoths, for aid. In any case, the Jews in 694 were accused of conspiring with Muslims across the Mediterranean. Jews, including baptised Jews, were declared traitors, had their property confiscated, and were enslaved. The decree exempted only the converts who dwelt in the mountain passes of Septimania, who were necessary for the kingdom's protection. The Eastern Roman Empire sent its navy on numerous occasions in the late 7th century and the early 8th century to try to incite uprisings among the Jewish and Christian Roman populations in Spain and Gaul against their Visigothic and Frankish rulers, an effort also aimed at halting the expansion of the Muslim Arabs in the Roman world. The Jews of Spain were utterly embittered and alienated by Catholic rule at the time of the Muslim invasion. The Moors were perceived as a liberating force and welcomed by Jews eager to help them to administer the country. In many conquered towns, the Muslims left the garrison in the hands of the Jews before they proceeded further north, which initiated the Golden Age of Spanish Jews. Jewish life in al-Andalus (711–1085) With the victory of Tariq ibn Ziyad in 711, the lives of the Sephardim changed dramatically. For the most part, the invasion of the Moors was welcomed by the Jews of Iberia. Both Muslim and Catholic sources report that Jews provided valuable aid to the invaders. Once Córdoba was captured, its defence was left in the hands of Jews, and Granada, Málaga, Seville, and Toledo were left to a mixed army of Jews and Moors. The Chronicle of Lucas de Tuy records that when the Catholics left Toledo on the Sunday before Easter to go to the Church of Saint Leocadia to listen to the divine sermon, the Jews acted treacherously, informed the Saracens, closed the gates of the city before the Catholics and opened them for the Moors. However, unlike de Tuy's account, Rodrigo Jiménez de Rada's De rebus Hispaniae maintains that Toledo was "almost completely empty of its inhabitants" not because of Jewish treachery but because "many had fled to Amiara, others to Asturias and some to the mountains" and the city was then fortified by a militia of Arabs and Jews (3.24). Although in the cases of some towns, the behaviour of the Jews may have been conducive to Muslim success, it was of limited impact overall. In spite of the restrictions placed upon the Jews as dhimmis, life under Muslim rule was one of great opportunity in comparison to that under the prior Catholic Visigoths, as was attested by the influx of Jews from abroad. To Jews throughout the Catholic and Muslim worlds, Iberia was seen as a land of relative tolerance and opportunity. After initial Arab-Berber victories, especially with the establishment of Umayyad dynasty rule by Abd al-Rahman I in 755, the native Jewish community was joined by Jews from the rest of Europe, as well as from Arab territories from Morocco to Mesopotamia (the latter region was known as Babylonia in Jewish sources). Thus, the Sephardim found themselves enriched culturally, intellectually, and religiously by the commingling of diverse Jewish traditions.
Contacts with Middle Eastern communities were strengthened, and the influence of the Babylonian academies of Sura and Pumbedita was at its greatest. As a result, until the mid-10th century, much Sephardic scholarship focused on Halakha. Although not as influential, the traditions of the Levant, then known as Palestine, were also introduced, reflected in an increased interest in Hebrew and biblical studies. Arabic culture, of course, also made a lasting impact on Sephardic cultural development. General re-evaluation of scripture was prompted by Muslim anti-Jewish polemics and the spread of rationalism, as well as the anti-Rabbanite polemics of Karaite Judaism. By adopting Arabic, as the Babylonian geonim (the heads of the Talmudic Academies in Babylonia) had done, educated Jews gained access to the cultural and intellectual achievements of Arabic culture, as well as to much of the scientific and philosophical speculation of Greek culture, which had been best preserved by Arab scholars. The meticulous regard which the Arabs had for grammar and style also had the effect of stimulating an interest among Jews in philological matters in general. Arabic came to be the main language of Sephardic science, philosophy and everyday business. From the second half of the 9th century, most Jewish prose, including many non-halakhic religious works, was in Arabic. The thorough adoption of Arabic greatly facilitated the assimilation of Jews into Arabic culture. Although the often-bloody disputes among Muslim factions initially kept Jews out of the political sphere, the roughly two centuries that preceded the Golden Age were marked by increased activity by Jews in a variety of professions, including medicine, commerce, finance and agriculture. By the ninth century, some members of the Sephardic community felt confident enough to take part in proselytizing amongst previously Jewish "Catholics". Most famous was the heated correspondence exchanged between Bodo the Frank, a former deacon who had converted to Judaism in 838, and the converso Bishop of Córdoba, Álvaro of Córdoba. Both men, trading such epithets as "wretched compiler", tried to convince the other to return to his former religion, but to no avail. During the al-Andalus period, Jews primarily lived in cities rather than rural areas. They likely made up around 2% of the overall population, though their presence was much more significant in certain regions. In medieval Granada, Jews may have even formed the majority, and the city was popularly referred to as Gharnātat al-Yahūd—"Granada of the Jews." After the Umayyad dynasty was overthrown by the Abbasids in 750, a surviving prince, 'Abd al-Raḥmān I, fled Damascus and eventually reached the Iberian Peninsula, where he established the independent Emirate of Córdoba in 756, with the city as its capital. In 929, his descendant 'Abd al-Raḥmān III proclaimed the Caliphate of Córdoba, asserting full political and religious independence from the Abbasid and Fatimid caliphates in the east. This marked the beginning of a period of relative stability, prosperity, and cultural flourishing in al-Andalus, attracting increasing numbers of Jewish migrants from North Africa, Italy and the eastern Mediterranean, where conditions had become increasingly unstable. A vibrant, largely Arabic-speaking Jewish community emerged, integrated into the region's commercial, intellectual, and administrative life. The onset of the so-called Golden Age is closely associated with the career of Ḥasdai ibn Shaprūṭ (c. 915–c.
970), a Jewish courtier who served 'Abd al-Raḥmān III and his successor, al-Ḥakam II. Initially recognized for his medical expertise, he rose to become a trusted advisor, diplomat, and financial administrator. Appointed nasi (leader) of the Jewish community, he played a central role in fostering a cultural and scholarly renaissance. Under his patronage, Hebrew studies flourished, and Córdoba became, in the words of one scholar, the "Mecca of Jewish scholars who could be assured of a hospitable welcome from Jewish courtiers and men of means." He founded a talmudic academy in the city under Rabbi Moses ben Hanoch, acquired Jewish texts from Babylonia, and drew to his circle notable figures such as Dunash ben Labraṭ, the innovator of Hebrew metrical poetry, and Menaḥem ben Saruq, compiler of the first Hebrew dictionary, which later gained wide use among Jewish communities in Germany and France. Hasdai benefited world Jewry not only by creating a favourable environment for scholarly pursuits within Iberia but also by using his influence to intervene on behalf of foreign Jews, as is reflected in his letter to the Byzantine Princess Helena. In it, he requested protection for the Jews under Byzantine rule, attested to the fair treatment of the Christians of al-Andalus and indicated that such treatment was contingent on the treatment of Jews abroad. As a prominent dignitary, he corresponded with the Khazars, whose kingdom had converted to Judaism in the 8th century. In 1009, the Caliphate of Córdoba entered a period of civil war and instability that ultimately led to its collapse. By 1031, the caliphate had formally disintegrated, marking the beginning of the Taifa period in al-Andalus. The region fragmented into numerous independent Muslim principalities, or taifas, each governed by local rulers. These mini-states were often centered around major cities—such as Seville, Granada, Zaragoza, and Toledo—and ruled by local dynasties or ambitious military leaders. While politically divided and frequently in conflict with one another, the taifas also experienced a burst of cultural and intellectual activity, often competing in patronage of poets, artists, and scientists. Rather than having a stifling effect, the disintegration of the caliphate expanded the opportunities open to Jewish and other professionals. The services of Jewish scientists, doctors, traders, poets and scholars were generally valued by the Christian as well as Muslim rulers of regional centres, especially as recently-conquered towns were put back into order. One of the most prominent Jews to hold high office in the taifa kingdoms was Samuel ibn Naghrillah, also known as Samuel ha-Nagid (993–1056). According to tradition, his rise to power began when his refined calligraphy brought him to the attention of the court in Granada, where he entered the service of King Ḥabbūs al-Muzaffar and later his son Bādīs ibn Ḥabbūs. Over the course of three decades, Samuel served as vizier, policy advisor, and military commander, one of the very few Jews in Islamic history, along with his son Joseph ibn Naghrilla, to lead Muslim armies. The period during which Samuel ha-Nagid commanded a Jewish-led army represents the only known instance of such leadership between antiquity and the modern state of Israel. A distinguished poet and scholar, Samuel also authored an introduction to the Talmud that remains influential. During the taifa period, Jews also held vizierial positions in cities such as Seville, Lucena, and Zaragoza.
Lucena experienced its heyday as a Jewish community from the 10th to the early 12th century, when it became one of the most important centers of Jewish life in al-Andalus; its population was reportedly entirely Jewish, and it was home to a prestigious yeshiva led by prominent scholars such as Isaac Alfasi and Joseph ibn Migash. Samuel ha-Nagid is regarded as one of the greatest poets of the Golden Age of Hebrew poetry in Spain, alongside figures such as Solomon ibn Gabirol, Judah Halevi and Abraham and Moses ibn Ezra. These poets composed a wide range of works, including secular poetry on love, friendship, nature, and war, as well as liturgical poems and religious hymns praising God and the covenant between the Creator and the people of Israel. Judah Halevi, born in Tudela, Navarre, is considered one of the greatest Hebrew poets of all time. Among his most celebrated works are the Zionides (Shirei Tzion), which express longing for the Land of Israel—especially the well-known poems Libi BaMizrah ("My Heart is in the East") and Siyyon ha-lo tishali ("Zion, Do You Not Inquire?"). HaLevi was also the author of the Kuzari, a fictional dialogue inspired by the Khazar king's conversion to Judaism. The work advocates the spiritual primacy of Judaism over rationalist philosophy and other religions, and concludes with a call to return to the Land of Israel. Later in life, he left Spain and set out for the Land of Israel, composing a final series of poems during his journey; he is believed to have died en route or at Jerusalem's gates. His poetic and philosophical legacy continued to influence Jewish thought and literature long after his death, and his works remain foundational texts in the Hebrew literary tradition. The intellectual achievements of the Sephardim of al-Andalus influenced the lives of non-Jews as well. Most notable of the literary contributions is Ibn Gabirol's neo-Platonic Fons Vitae ("The Source of Life"). Thought by many to have been written by a Christian, the work was admired by Christians and studied in monasteries throughout the Middle Ages. Some Arabic philosophers followed Jewish ones in their ideas although that phenomenon was somewhat hindered in that, although in Arabic, Jewish philosophical works were usually written with Hebrew characters. Jews were also active in such fields as astronomy, medicine, logic and mathematics. In addition to training the mind in logical yet abstract and subtle modes of thought, the study of the natural world, as the direct study of the work of the Creator, was ideally a way to better understand and become closer to God. Al-Andalus also became a major centre of Jewish philosophy during Hasdai's time. Following the tradition of the Talmud and the Midrash, many of the most notable Jewish philosophers were dedicated to the field of ethics, although the ethical Jewish rationalism rested on the notion that traditional approaches had not been successful in their treatments of the subject in that they were lacking in rational, scientific arguments. In addition to contributions of original work, the Sephardim were active as translators. Greek texts were rendered into Arabic, Arabic into Hebrew, Hebrew and Arabic into Latin and all combinations of vice versa occurred. In translating the great works of Arabic, Hebrew, and Greek into Latin, Iberian Jews were instrumental in bringing the fields of science and philosophy, which formed much of the basis of Renaissance learning, into the rest of Europe. 
The so-called Golden Age of Jewish life in Muslim Spain began to wane well before the completion of the Christian Reconquista, eroded in part by the growing influence of zealot Islamic movements from North Africa. A major turning point came with the Granada massacre of 1066, when a Muslim mob stormed the royal palace where Joseph ibn Nagrela, son of Samuel ha-Nagid and vizier to the emir of Granada, had sought refuge. He was seized and publicly crucified, and the violence quickly escalated into a full-scale pogrom in which 4,000 Jews were reportedly killed and 1,500 Jewish families were attacked. Almoravids and Almohads (1085–1215) After the fall of Toledo to Christian forces in 1085, the ruler of Seville appealed to the Almoravids, a Berber Muslim dynasty from North Africa, for military assistance. The Almoravids, known for their strict religious conservatism, abhorred the more cosmopolitan and tolerant culture of al-Andalus, including the elevated status that some dhimmīs (non-Muslims under Muslim rule) held within Andalusi society. In addition to battling the Christians, who were gaining ground, the Almoravides implemented numerous reforms to bring al-Andalus more in line with their notions of proper Islam. In spite of large-scale forcible conversions, Sephardic culture was not entirely decimated. Members of Lucena's Jewish community, for example, managed to bribe their way out of conversion. As the spirit of Andalusian Islam was absorbed by the Almoravides, policies concerning Jews were relaxed. The poet Moses ibn Ezra continued to write during this time, and several Jews served as diplomats and physicians to the Almoravides. Wars in North Africa with Muslim tribes eventually forced the Almoravides to withdraw their forces from Iberia. As the Christians advanced, Iberian Muslims again appealed to their brethren to the south, this time to those who had displaced the Almoravides in north Africa. The Almohads, who had taken control of much of Islamic Iberia by 1172, far surpassed the Almoravides in fundamentalist outlook and treated the dhimmis harshly. Jews and Christians were expelled from Morocco and Islamic Spain. Faced with the choice of either death or conversion, many Jews emigrated. Some, such as the family of Maimonides, fled south and east to the more tolerant Muslim lands, and others went northward to settle in the growing Christian kingdoms. Meanwhile, the Reconquista continued in the north. By the early 12th century, conditions for some Jews in the emerging Christian kingdoms were becoming increasingly favourable. As had happened during the reconstruction of towns after the breakdown of authority under the Umayyads, the services of Jews were employed by the Christian leaders, who were increasingly emerging victorious during the later Reconquista. The Jews' knowledge of the language and the culture of the enemy, their skills as diplomats and professionals and their desire for relief from intolerable conditions rendered their services of great value to the Christians during the Reconquista, the very same reasons that they had proved useful to the Arabs in the early stages of the Muslim invasion. The necessity of having conquerors settle in reclaimed territories also outweighed the prejudices of antisemitism, at least while the Islamic threat was imminent. Thus, as conditions in Islamic Iberia worsened, immigration to Christian principalities increased. The Jews from the Muslim south were not entirely secure in their northward migrations, however. 
Old prejudices were compounded by newer ones. Suspicions of complicity with Islam were alive, and Jews who immigrated from Muslim territories spoke Arabic. However, many of the newly-arrived Jews of the north prospered during the late eleventh and early twelfth centuries. The majority of Latin documentation regarding Jews during the period refers to their landed property, fields and vineyards. In many ways, life had come full circle for the Sephardim of al-Andalus. As conditions became more oppressive in the areas under Muslim rule during the 12th and the 13th centuries, Jews again looked to an outside culture for relief. Christian leaders of reconquered cities granted them extensive autonomy, and Jewish scholarship recovered and developed as communities grew in size and importance (Assis, p. 18). However, the Reconquista Jews never reached the same heights as had those of the Golden Age. Christian kingdoms (974–1300) Catholic princes,[who?] the counts of Castile and the first kings of León, treated the Jews harshly. In their operations against the Moors they did not spare the Jews, destroying their synagogues and killing their teachers and scholars.[citation needed] Only gradually did the rulers come to realize that, surrounded as they were by powerful enemies, they could not afford to turn the Jews against them.[citation needed] Garcia Fernandez, Count of Castile, in the fuero of Castrojeriz (974), placed the Jews in many respects on an equality with Catholics; and similar measures were adopted by the Council of Leon (1020), presided over by Alfonso V. In Leon many Jews owned real estate, and engaged in agriculture and viticulture as well as in the handicrafts; and here, as in other towns, they lived on friendly terms with the Christian population.[citation needed] The Council of Coyanza [es] (1050) therefore found it necessary to revive the old Visigothic law forbidding, under pain of punishment by the Church, Jews and Christians to live together in the same house, or to eat together.[citation needed] Ferdinand I of Castile set aside a part of the Jewish taxes for the use of the Church, and even the not very religious-minded Alfonso VI gave to the church of León the taxes paid by the Jews of Castro. Alfonso VI, the conqueror of Toledo (1085), was tolerant and benevolent in his attitude toward the Jews, for which he won the praise of Pope Alexander II. To estrange the wealthy and industrious Jews from the Moors he offered the former various privileges. In the fuero of Najara Sepulveda, issued and confirmed by him in 1076, he not only granted the Jews full equality with Catholics, but he even accorded them the rights enjoyed by the nobility. To show their gratitude to the king for the rights granted them, the Jews willingly placed themselves at his and the country's service. Alfonso's army contained 40,000 Jews, who were distinguished from the other combatants by their black-and-yellow turbans; for the sake of this Jewish contingent the Battle of Sagrajas was not begun until after the Sabbath had passed. The king's favoritism toward the Jews, which became so pronounced that Pope Gregory VII warned him not to permit Jews to rule over Catholics, roused the hatred and envy of the latter. After the Battle of Uclés, at which the Infante Sancho, together with 30,000 men were killed, an anti-Jewish riot broke out in Toledo; many Jews were slain, and their houses and synagogues were burned (1108). 
Alfonso intended to punish the murderers and incendiaries, but died in June 1109 before he could carry out his intention. After his death the inhabitants of Carrión de los Condes fell upon the Jews; many were slain, others were imprisoned, and their houses were pillaged. Alfonso VII, who assumed the title of Emperor of León, Toledo, and Santiago, curtailed in the beginning of his reign the rights and liberties which his father had granted the Jews. He ordered that neither a Jew nor a convert might exercise legal authority over Catholics, and he held the Jews responsible for the collection of the royal taxes. Soon, however, he became more friendly, confirming the Jews in all their former privileges and even granting them additional ones, by which they were placed on an equality with Catholics. Considerable influence with the king was enjoyed by Judah ben Joseph ibn Ezra (Nasi). After the conquest of Calatrava (1147) the king placed Judah in command of the fortress, later making him his court chamberlain. Judah ben Joseph stood in such favor with the king that the latter, at his request, not only admitted into Toledo the Jews who had fled from the persecutions of the Almohades, but even assigned many fugitives dwellings in Flascala (near Toledo), Fromista, Carrion, Palencia, and other places, where new congregations were soon established. After the brief reign of King Sancho III, a war broke out between Fernando II of León (who granted the Jews special privileges) and the united kings of Aragon and Navarre. Jews fought in both armies, and after the declaration of peace they were placed in charge of the fortresses. Alfonso VIII of Castile (1166–1214), who had succeeded to the throne, entrusted the Jews with guarding Or, Celorigo, and, later, Mayorga, while Sancho the Wise of Navarre placed them in charge of Estella, Funes, and Murañon. During the reign of Alfonso VIII the Jews gained still greater influence, aided, doubtless, by the king's love of the beautiful Rachel (Fermosa) of Toledo, who was Jewish. When the king was defeated at the Battle of Alarcos by the Almohades under Yusuf Abu Ya'kub al-Mansur, the defeat was attributed to the king's love-affair with Fermosa, and she and her relatives were murdered in Toledo by the nobility. After the victory at Alarcos, the emir Muhammad al-Nasir ravaged Castile with a powerful army and threatened to overrun the whole of Catholic Spain. The Archbishop of Toledo called for a crusade to aid Alfonso. In this war against the Moors the king was greatly aided by the wealthy Jews of Toledo, especially by his "almoxarife mayor", the learned and generous Nasi Joseph ben Solomon ibn Shoshan (Al-Hajib ibn Amar). The Crusaders were hailed with joy in Toledo, but this joy was soon changed to sorrow, as far as the Jews were concerned. The Crusaders began the "holy war" in Toledo (1212) by robbing and killing the Jews, and if the knights had not checked them with armed forces all the Jews in Toledo would have been slain. When, after the battle of Las Navas de Tolosa (1212), Alfonso victoriously entered Toledo, the Jews went to meet him in triumphal procession. Shortly before his death (Oct. 1214) the king issued the fuero de Cuenca, settling the legal position of the Jews in a manner favorable to them. A turning-point in the history of the Jews of Spain was reached under Ferdinand III (who permanently united the kingdoms of León and Castile) and under James I, the contemporary ruler of Aragon. The clergy's endeavors against the Jews became more and more pronounced.
Spanish Jews of both sexes, like the Jews of France, were compelled to distinguish themselves from Catholics by wearing a yellow badge on their clothing; this order was issued to keep them from associating with Catholics, although the reason given was that it was ordered for their own safety. Some Jews such as Vidal Taroç, were also allowed to own land. The papal bull issued by Pope Innocent IV in April 1250, to the effect that Jews might not build a new synagogue without special permission, also made it illegal for Jews to proselytize, under pain of death and confiscation of property. They might not associate with the Catholics, live under the same roof with them, eat and drink with them, or use the same bath; neither might a Catholic partake of wine which had been prepared by a Jew. The Jews might not employ Catholic nurses or servants, and Catholics might use only medicinal remedies which had been prepared by competent Catholic apothecaries. Every Jew should wear the badge, though the king reserved to himself the right to exempt anyone from this obligation; any Jew apprehended without the badge was liable to a fine of ten gold maravedís or to the infliction of ten stripes. Jews were also forbidden to appear in public on Good Friday. The Jews in Spain were citizens of the kingdoms in which they resided (Castile, Aragón, and Valencia were the most important), both as regards their customs and their language. They owned real estate, and they cultivated their land with their own hands; they filled public offices, and on account of their industry they became wealthy while their knowledge and ability won them respect and influence. But this prosperity roused the jealousy of the people and provoked the hatred of the clergy; the Jews had to suffer much through these causes. The kings, especially those of Aragon, regarded the Jews as their property; they spoke of "their" Jews, "their" juderías (Jewish neighborhoods), and in their own interest they protected the Jews against violence, making good use of them in every way possible. The Jews were vassals of the king, the same as Christian commoners.[citation needed] There were about 120 Jewish communities in Catholic Spain around 1300, with somewhere around half a million or more Jews,[citation needed] mostly in Castille. Catalonia, Aragón, and Valencia were more sparsely inhabited by Jews. Even though the Spanish Jews engaged in many branches of human endeavor—agriculture, viticulture, industry, commerce, and the various handicrafts—it was the money business that procured to some of them their wealth and influence. Kings and prelates, noblemen and farmers, all needed money and could obtain it only from the Jews, to whom they paid from 20 to 25 percent interest. This business, which, in a manner, the Jews were forced to pursue[citation needed] in order to pay the many taxes imposed upon them as well as to raise the compulsory loans demanded of them by the kings,[citation needed] led to their being employed in special positions, as "almonries", bailiffs, tax collectors. The Jews of Spain formed in themselves a separate political body. They lived almost solely in the Juderias, various enactments being issued from time to time preventing them from living elsewhere. From the time of the Moors they had had their own administration. At the head of the aljamas in Castile stood the "rab de la corte", or "rab mayor" (court, or chief, rabbi), also called "juez mayor" (chief justice), who was the principal mediator between the state and the aljamas. 
These court rabbis were men who had rendered services to the state, as, for example, David ibn Yaḥya and Abraham Benveniste, or who had been royal physicians, as Meïr Alguadez and Jacob ibn Nuñez, or chief tax-farmers, as the last incumbent of the court rabbi's office, Abraham Senior. They were appointed by the kings, no regard being paid to the rabbinical qualifications or religious inclination of those chosen. 1300–1391 In the beginning of the fourteenth century the position of Jews became precarious throughout Spain as antisemitism increased. Many Jews emigrated from the crowns of Castile and Aragon. It was not until the reigns of Alfonso IV and Peter IV of Aragon, and of the young and active Alfonso XI of Castile (1325), that an improvement set in. In 1328, 5,000 Jews were killed in Navarre following the preaching of a mendicant friar. Peter of Castile, the son and successor of Alfonso XI, was relatively favorably disposed toward the Jews, who under him reached the zenith of their influence – often exemplified by the success of his treasurer, Samuel ha-Levi. For this reason, the king was called "the heretic" and often "the cruel". Peter, whose education had been neglected, was not quite sixteen years of age when he ascended the throne in 1350. From the commencement of his reign he so surrounded himself with Jews that his enemies in derision spoke of his court as "a Jewish court".[who?] Soon, however, a civil war erupted, as Henry II of Castile and his brother, at the head of a mob, invaded on 7 May 1355 that part of the Judería of Toledo called the Alcaná; they plundered the warehouses and murdered about 1200 Jews, without distinction of age or sex. The mob did not, however, succeed in overrunning the Judería of Toledo proper, which was defended by the Jews and by knights loyal to the King. Following the succession of John I of Castile, conditions for Jews seem to have improved somewhat, with John I even making legal exemptions for some Jews, such as Abraham David Taroç. The more friendly Peter showed himself toward the Jews, and the more he protected them, the more antagonistic became the attitude of his illegitimate half-brother, who, when he invaded Castile in 1360, murdered all the Jews living in Nájera and exposed those of Miranda de Ebro to robbery and death. Everywhere the Jews remained loyal to King Peter, in whose army they fought bravely; the king showed his good-will toward them on all occasions, and when he called the King of Granada to his assistance he especially requested the latter to protect the Jews. Nevertheless they suffered greatly. Villadiego, whose Jewish community numbered many scholars, Aguilar, and many other towns were totally destroyed. The inhabitants of Valladolid, who paid homage to his half-brother Henry, robbed the Jews, destroyed their houses and synagogues, and tore their Torah scrolls to pieces. Paredes, Palencia, and several other communities met with a like fate, and 300 Jewish families from Jaén were taken prisoners to Granada. The suffering, according to a contemporary writer, Samuel Zarza of Palencia, had reached its culminating point, especially in Toledo, which was being besieged by Henry, and in which no less than 8,000 persons died through famine and the hardships of war. This civil conflict did not end until the death of Peter, of whom the victorious brother said, derisively, "Dó esta el fi de puta Judio, que se llama rey de Castilla?"
("Where is the Jewish son of a bitch, who calls himself king of Castile?") Peter was beheaded by Henry and Bertrand Du Guesclin on March 14, 1369. A few weeks before his death he reproached his physician and astrologer Abraham ibn Zarzal for not having told the truth in prophesying good fortune for him. When Henry de Trastámara ascended the throne as Henry II an era of suffering and intolerance began for the Castilian Jews, culminating in their expulsion. Prolonged warfare had devastated the land; the people had become accustomed to lawlessness, and the Jews had been reduced to poverty. But in spite of his aversion for the Jews, Henry did not dispense with their services. He employed wealthy Jews—Samuel Abravanel and others—as financial councilors and tax-collectors. His contador mayor, or chief tax-collector, was Joseph Pichon of Seville. The clergy, whose power became greater and greater under the reign of the fratricide, stirred the anti-Jewish prejudices of the masses into clamorous assertion at the Cortes of Toro in 1371. It was demanded that the Jews should be kept far from the palaces of the grandees, should not be allowed to hold public office, should live apart from the Catholics, should not wear costly garments nor ride on mules, should wear the badge, and should not be allowed to bear Catholic names. The king granted the two last-named demands, as well as a request made by the Cortes of Burgos in 1379 that Jews should neither carry arms nor sell weapons; but he did not prevent them from holding religious disputations, nor did he deny them the exercise of criminal jurisprudence. The latter prerogative was not taken from them until the reign of John I, Henry's son and successor; he withdrew it because certain Jews, on the king's coronation-day, by withholding the name of the accused, had obtained his permission to inflict the death-penalty on Joseph Pichon, who stood high in the royal favor; the accusation brought against Pichon included "harboring evil designs, informing, and treason. In the Cortes of Soria of 1380, it was enacted that rabbis, or heads of aljamas, should be forbidden, under penalty of a fine of 6,000 maravedís, to inflict upon Jews the penalties of death, mutilation, expulsion, or excommunication; but in civil proceedings they were still permitted to choose their own judges. In consequence of an accusation that the Jewish prayers contained clauses cursing the Catholics, the king ordered that within two months, on pain of a fine of 3,000 maravedís, they should remove from their prayer-books the objectionable passages. Whoever caused the conversion to Judaism of a Moor or of any one confessing another faith, or performed the rite of circumcision upon him, became a slave and the property of the treasury. The Jews no longer dared show themselves in public without the badge, and in consequence of the ever-growing hatred toward them they were no longer sure of life or limb; they were attacked and robbed and murdered in the public streets, and at length the king found it necessary to impose a fine of 6,000 maravedís on any town in which a Jew was found murdered. Against his desire, John was obliged in 1385 to issue an order prohibiting the employment of Jews as financial agents or tax-farmers to the king, queen, infantes, or grandees. To this was added the resolution adopted by the Council of Palencia ordering the complete separation of Jews and Catholics and the prevention of any association between them. 
Massacres and mass conversions of 1391 "The execution of Joseph Pichon and the inflammatory speeches and sermons delivered in Seville by Archdeacon Ferrand Martínez, the pious Queen Leonora's confessor, soon raised the hatred of the populace to the highest pitch. The feeble King John I, in spite of the endeavors of his physician Moses ibn Ẓarẓal to prolong his life, died at Alcalá de Henares on 9 October 1390, and was succeeded by his eleven-year-old son. The regency council appointed by the king in his testament, consisting of prelates, grandees, and six citizens from Burgos, Toledo, León, Seville, Córdoba, and Murcia, was powerless; every vestige of respect for law and justice had disappeared. Ferrand Martínez, although deprived of his office, continued, in spite of numerous warnings, to incite the public against the Jews, and encourage it to acts of violence. As early as January 1391, the prominent Jews who were assembled in Madrid received information that riots were threatening in Seville and Córdoba. A revolt broke out in Seville in 1391. Juan Alfonso de Guzmán, Count of Niebla and governor of the city, and his relative, the "alguazil mayor" Alvar Pérez de Guzmán, had ordered, on Ash Wednesday, 15 March according to the source, the arrest and public whipping of two of the mob-leaders. If that date had really been Ash Wednesday, which falls 46 days before Easter, Easter would have fallen on 30 April, which is impossible in western Christianity, where Easter can fall no later than 25 April. The fanatical mob, still further exasperated thereby, murdered and robbed several Jews and threatened the Guzmáns with death. In vain did the regency issue prompt orders; Ferrand Martínez continued unhindered his inflammatory appeals to the rabble to kill the Jews or baptize them. On 6 June the mob attacked the Judería of Seville from all sides and killed 4000 Jews; the rest submitted to baptism as the only means of escaping death." "At this time Seville is said to have contained 7000 Jewish families. Of the three large synagogues existing in the city two were transformed into churches. In all the towns throughout the archbishopric, as in Alcalá de Guadeira, Écija, Cazalla, and in Fregenal de la Sierra, the Jews were robbed and slain. In Córdoba this butchery was repeated in a horrible manner; the entire Judería de Córdoba was burned down; factories and warehouses were destroyed by the flames. Before the authorities could come to the aid of the defenseless people, every one of them—children, young women, old men—had been ruthlessly slain; 2000 corpses lay in heaps in the streets, in the houses, and in the wrecked synagogues." From Córdoba the spirit of murder spread to Jaén. A horrible butchery took place in Toledo on June 20. Among the many martyrs were the descendants of the famous Toledan rabbi Asher ben Jehiel. Most of the Castilian communities suffered from the persecution; nor were the Jews of Aragon, Catalonia, or Majorca spared. On July 9, an outbreak occurred in Valencia. More than 200 persons were killed, and most of the Jews of that city were baptized by the friar Vicente Ferrer, whose presence in the city was probably not accidental. The only community remaining in the former kingdom of Valencia was that of Murviedro. On Aug. 2 the wave of murder visited Palma, in Majorca; 300 Jews were killed, and 800 found refuge in the fort, from which, with the permission of the governor of the island, and under cover of night, they sailed to North Africa; many submitted to baptism. Three days later, on Saturday, August 5, a riot began in Barcelona.
On the first day, 100 Jews were killed, while several hundred found refuge in the new fort; on the following day the mob invaded the Judería and began pillaging. The authorities did all in their power to protect the Jews, but the mob attacked them and freed those of its leaders who had been imprisoned. On Aug. 8 the citadel was stormed, and more than 300 Jews were murdered, among the slain being the only son of Ḥasdai Crescas. The riot raged in Barcelona until Aug. 10, and many Jews (though not 11,000 as claimed by some authorities) were baptized. On the last-named day began the attack upon the Judería in Girona; several Jews were robbed and killed; many sought safety in flight and a few in baptism. "The last town visited was Lérida (August 13). The Jews of this city vainly sought protection in the Alcázar; 75 were slain, and the rest were baptized; the latter transformed their synagogue into a church, in which they worshiped as Marranos." Several accounts bearing on the widespread persecution of Iberian Jewry between the years 1390 and 1391 can be found in contemporary Jewish sources, such as in the Responsa of Isaac ben Sheshet (1326–1408), and in the seminal writing of Gedaliah ibn Yahya ben Joseph, Shalshelet haQabbalah (written ca. 1586), as also in Abraham Zacuto's Sefer Yuḥasin, in Solomon ibn Verga's Shevaṭ Yehudah, as well as in a letter written by Don Hasdai Crescas to the Jews of Avignon in the winter of 1391 concerning the events in Spain in that year; the letter is dated 19 October 1391. According to Don Hasdai Crescas, persecution against Jews began in earnest in Seville in 1391, on the 1st day of the lunar month Tammuz (June). From there the violence spread to Córdoba, and by the 17th day of the same lunar month, it had reached Toledo (then called by Jews after its Arabic name, Ṭulayṭulah). From there, the violence spread to Mallorca and by the 1st day of the lunar month Elul it had also reached the Jews of Barcelona in Catalonia, where the slain were estimated at two hundred and fifty. So, too, many Jews who resided in the neighboring provinces of Lérida and Gironda and in the kingdom of València had been affected, as were also the Jews of al-Andalus; many died a martyr's death, while others converted in order to save themselves. Incitement, disputations and anti-Jewish legislation (1391–1474) The year 1391 forms a turning-point in the history of the Spanish Jews. The persecution was the immediate forerunner of the Inquisition, which, ninety years later, was introduced as a means of watching for heresy among converted Jews. The number of those who had embraced Catholicism in order to escape death was very large – over half of Spain's Jews according to Joseph Pérez, with some 200,000 converts and only 100,000 openly practicing Jews remaining by 1410. Jews of Baena, Montoro, Baeza, Úbeda, Andújar, Talavera, Maqueda, Huete, and Molina, and especially of Zaragoza, Barbastro, Calatayud, Huesca, and Manresa, had submitted to baptism. Among those baptized were several wealthy men and scholars who scoffed at their former coreligionists; some, such as Solomon ha-Levi, or Paul de Burgos (called also Paul de Santa Maria), and Joshua Lorqui, or Gerónimo de Santa Fe, even became the bitterest enemies and persecutors of their former brethren. After the bloody excesses of 1391 the popular hatred of the Jews continued unabated.
The Cortes of Madrid and that of Valladolid (1405) mainly busied themselves with complaints against the Jews, so that Henry III found it necessary to prohibit the latter from practising usury and to limit the commercial intercourse between Jews and Catholics; he also reduced by one-half the claims held by Jewish creditors against Catholics. Indeed, the feeble and suffering king, the son of Leonora, who hated the Jews so deeply that she even refused to accept their money, showed no feelings of friendship toward them. Though on account of the taxes of which he was thereby deprived he regretted that many Jews had left the country and settled in Málaga, Almería, and Granada, where they were well treated by the Moors, and though shortly before his death he inflicted a fine of 24,000 doubloons on the city of Córdoba because of a riot that had taken place there (1406), during which the Jews had been plundered and many of them murdered, he prohibited the Jews from attiring themselves in the same manner as other Spaniards, and he insisted strictly on the wearing of the badge by those who had not been baptized. Many of the Jews from Valencia, Catalonia and Aragon thronged to North Africa, particularly Algiers. The forced conversions also possibly contributed to the resurgence of Kabbalah studies among the Jews of Spain in the early 15th century. At the Catholic preacher Ferrer's request a law consisting of twenty-four clauses, which had been drawn up by Paul of Burgos, né Solomon haLevi, was issued in January 1412 in the name of the child-king John II of Castile.[citation needed] The object of this law was to reduce the Jews to poverty and to further humiliate them. They were ordered to live by themselves, in enclosed Juderías, and they were to repair, within eight days after the publication of the order, to the quarters assigned them under penalty of loss of property. They were prohibited from practising medicine, surgery, or chemistry (pharmacy) and from dealing in bread, wine, flour, meat, etc. They might not engage in handicrafts or trades of any kind, nor might they fill public offices, or act as money-brokers or agents. They were not allowed to hire Catholic servants, farmhands, lamplighters, or gravediggers; nor might they eat, drink, or bathe with Catholics, or hold intimate conversation (have sexual relations) with them, or visit them, or give them presents. Catholic women, married or unmarried, were forbidden to enter the Judería either by day or by night. The Jews were allowed no self-jurisdiction whatever, nor might they, without royal permission, levy taxes for communal purposes; they might not assume the title of "Don", carry arms, or trim beard or hair. Jewish women were required to wear plain, long mantles of coarse material reaching to the feet; and it was strictly forbidden for Jews to wear garments made of better material. On pain of loss of property and even of slavery, they were forbidden to leave the country, and any grandee or knight who protected or sheltered a fugitive Jew was punished with a fine of 150,000 maravedís for the first offense. These laws, which were rigidly enforced, any violation of them being punished with a fine of 300–2,000 maravedís and flagellation, were calculated to compel the Jews to embrace Catholicism.[citation needed] The persecution of the Jews was now pursued systematically. 
In the hope of mass conversions, Benedict XIII, on 11 May 1415, issued a papal bull consisting of twelve articles, which, in the main, corresponded with the decree ("Pragmática") issued by Catalina, and which had been placed on the statutes of Aragon by Fernando. By this bull Jews and neophytes were forbidden to study the Talmud, to read anti-Catholic writing, in particular the work "Macellum" ("Mar Jesu"), to pronounce the names of Jesus, Maria, or the saints, to manufacture communion-cups or other church vessels or accept such as pledges, or to build new synagogues or ornament old ones. Each community might have only one synagogue. Jews were denied all rights of self-jurisdiction, nor might they proceed against malsines (accusers). They might hold no public offices, nor might they follow any handicrafts, or act as brokers, matrimonial agents, physicians, apothecaries, or druggists. They were forbidden to bake or sell matzot, or to give them away; neither might they dispose of meat which they were prohibited from eating. They might have no intercourse (sex) with Catholics, nor might they disinherit their baptized children. They were to wear the badge at all times, and thrice a year all Jews over twelve, of both sexes, were required to listen to a Catholic sermon. (The bull is reprinted, from a manuscript in the archives of the cathedral in Toledo, by Rios ["Hist." ii. 627–653].) Under the Catholic Monarchs (1474–1492) In the 1470s, the Catholic Monarchs of Spain—Isabella I of Castile (who ascended the throne in 1474) and Ferdinand II of Aragon (in 1479)—rose to power and initiated the dynastic union that would lay the foundations for a unified Spanish monarchy. Contemporary sources record that upon Isabella's coronation in Ávila, she was welcomed by the city's Jewish community with Torah scrolls, trumpets, and drums. Later, when the monarchs entered Seville, they were again greeted with enthusiasm by the local Jewish population, in stark contrast to the insults Ferdinand received from Castilian Christians, who viewed him as a foreigner. The crown soon adopted increasingly restrictive policies intended to separate the Jews both from the conversos and from their fellow countrymen. At the Cortes of Toledo, in 1480, all Jews were ordered to be separated in special barrios, and at the Cortes of Fraga, two years later, the same law was enforced in Navarre, where they were ordered to be confined to the Juderías at night. The same year saw the establishment of the Spanish Inquisition, the main object of which was to deal with the conversos. Though both monarchs were surrounded by Neo-Catholics, such as Pedro de Caballería and Luis de Santángel, and though Ferdinand was the grandson of a Jew, he showed the greatest intolerance to Jews, whether converted or otherwise, commanding all "conversos" to reconcile themselves with the Inquisition by the end of 1484, and obtaining a bull from Pope Innocent VIII ordering all Catholic princes to return all fugitive conversos to the Inquisition of Spain. One of the reasons for the increased rigor of the Catholic monarchs was the disappearance of the fear of any united action by Jews and Moors, the kingdom of Granada being at its last gasp. The rulers did, however, promise the Jews of the Moorish kingdom that they could continue to enjoy their existing rights in exchange for aiding the Spaniards to overthrow the Moors. This promise, dated 11 February 1490, was nevertheless repudiated by the decree of expulsion.
See the Catholic Monarchs of Spain.[citation needed] The prohibitions, persecution and eventual Jewish mass emigration from Spain and Portugal probably had adverse effects on the development of the Spanish economy. Jews and non-Catholic Christians reportedly had substantially better numerical skills than the Catholic majority, which might be due to Jewish religious doctrine, with its strong focus on education; Torah reading, for example, was compulsory. Even when Jews were forced to quit their highly skilled urban occupations, their numeracy advantage persisted. However, during the Inquisition, spillover effects of these skills were rare because of forced separation and Jewish emigration, which was detrimental to economic development. In January 1483, likely with royal approval, the Inquisition ordered the expulsion of Jews from Andalusia. In the following years, several murder accusations were leveled against Jews. In 1485, the inquisitor Pedro de Arbués was assassinated at the cathedral of Zaragoza in a plot attributed primarily to conversos; dozens were executed or punished, though records suggest some "old Christians" were also involved but largely escaped prosecution. Among the prosecuted conversos was Francisco de Santa Fe, a grandson of the well-known convert Gerónimo de Santa Fe; he committed suicide in prison, and his body was burned and the ashes thrown into the river. The hands of some of the accused were cut off and nailed to the cathedral door before they were beheaded and quartered. In 1491, the infamous 'Holy Child of La Guardia' blood libel involved the false accusation of Jews and conversos of the ritual murder of a Christian child; confessions were extracted under torture, and all defendants were burned at the stake, despite no evidence that a child had disappeared. A small number of pre-expulsion synagogues survive, including the Synagogue of Santa María la Blanca and the Synagogue of El Tránsito in Toledo, the Córdoba Synagogue, the Híjar Synagogue, the Old Main Synagogue in Segovia, the Valencia de Alcántara Synagogue and the newly discovered Synagogue of Utrera. Edict of Expulsion (1492) Several months after the fall of Granada, an edict of expulsion called the Alhambra Decree was issued against the Jews of Spain by Ferdinand and Isabella on 31 March 1492. It ordered all Jews of whatever age to leave the kingdom by the last day of July: one day before Tisha B'Av. They were permitted to take their property provided it was not in gold, silver, or money. The reason given for this action in the preamble of the edict was the relapse of so many conversos owing to the proximity of unconverted Jews, who seduced them from Christianity and kept alive in them the knowledge and practices of Judaism. It is claimed that Isaac Abarbanel, who had previously ransomed 480 Jews of Málaga from the Catholic Monarchs by a payment of 20,000 doubloons, now offered them 600,000 crowns for the revocation of the edict. It is said also that Ferdinand hesitated, but was prevented from accepting the offer by Tomás de Torquemada, the grand inquisitor, who dashed into the royal presence and, throwing a crucifix down before the king and queen, asked whether, like Judas, they would betray their Lord for money. Torquemada was reputedly of converso ancestry, and the confessor of Isabella, Espina, was previously a rabbi. Whatever the truth of this story, there were no signs of relaxation shown by the court, and the Jews of Spain made preparations for exile.
In some cases, as at Vitoria, they took steps to prevent the desecration of the graves of their kindred by presenting the cemetery, called the Judimendi, to the municipality, a precaution that was not unjustified, as the Jewish cemetery of Seville was later ravaged by the people. The members of the Jewish community of Segovia passed the last three days of their stay in the city in the Jewish cemetery, fasting and wailing over being parted from their beloved dead. The number of Jews exiled from Spain is subject to controversy, with early observers and historians providing highly exaggerated figures in the hundreds of thousands. By the time of the expulsion, little more than 100,000 practicing Jews remained in Spain, since the majority had already converted to Catholicism. This, in addition to the indeterminate number who managed to return, has led recent academic investigations, such as those of Joseph Pérez and Julio Valdeón, to offer figures of somewhere between 50,000 and 80,000 practicing Jews expelled from Spanish territory. Jewish expulsion is a well-established pattern in European history: from the 13th to the 16th century, at least 15 European countries expelled their Jewish populations. The expulsion of the Jews from Spain was preceded by expulsions from England, France, and Germany, among many others, and succeeded by at least five more expulsions.

Conversos

Henceforth the history of the Jews in Spain is that of the conversos, whose numbers, as has been shown, had been increased by no less than 50,000 during the period of the expulsion, to a possible total of 300,000. For three centuries after the expulsion, Spanish conversos were subject to suspicion by the Spanish Inquisition, which executed over 3,000 people between 1570 and 1700 on charges of heresy (including the practice of Judaism). They were also subject to more general discriminatory laws known as "limpieza de sangre" ("purity of blood"), which required Spaniards to prove their "old Christian" background in order to access certain positions of authority. During this period hundreds of conversos escaped to nearby countries such as England, France, and the Netherlands, or converted back to Judaism, thus becoming part of the communities of Western Sephardim or Spanish and Portuguese Jews. Conversos played an important leadership role[which?] in the Revolt of the Comuneros (1520–1522), a popular revolt and civil war in the Crown of Castile against the imperial pretensions of Holy Roman Emperor Charles V. Conversos also played a prominent role in shaping Spanish intellectual and literary culture, particularly during the period commonly referred to as the "Spanish Golden Age". Their influence began to emerge as early as the fifteenth century, well before the height of this cultural flourishing. One of the most striking examples of this influence is the authorship of La Celestina, a 1499 book by Fernando de Rojas considered the first modern play in any language. Conversos were central contributors not only to poetry and fiction but also to historical chronicles, anti-Jewish polemics, philosophical texts, and other literary forms. In the 15th century, the chronicler Alonso de Palencia reported that many conversos in Andalusia continued to believe in the coming of the messiah, interpreting unusual natural events (such as the sighting of a whale off the coast near Setúbal, which they identified with the biblical sea monster Leviathan) as signs of its imminent arrival.
However, it is unclear whether such beliefs referred to the Jewish messiah or to Christ's second coming.

1858 to the present

Small numbers of Jews started to arrive in Spain in the 19th century, and synagogues were opened in Madrid.[citation needed] By 1900, not taking Ceuta and Melilla into account, about 1,000 Jews lived in Spain. Jews began to settle in Melilla as early as 1862, and the city's Jewish community grew throughout the early 20th century with the arrival of Moroccan Jews spurred by the events at Taza under Bou Hmara, the 1909 Melillan Campaign, World War I, and the Rif War. Spanish historians started to take an interest in the Sephardim and in their language, Judaeo-Spanish. There was a Spanish rediscovery of the Jews of Northern Morocco who still conserved this language and practiced old Spanish customs. The dictatorship of Miguel Primo de Rivera (1923–1930) granted the right of Spanish citizenship to a certain number of Sephardim by a decree of 20 December 1924. The conditions were that they had previously enjoyed Spanish protection while living in the Ottoman Empire and that they applied before 31 December 1930. A similar measure was undertaken by the French government regarding non-Muslims in the Levant who had previously been protected by France. The decree especially addressed Jews from Thessaloniki who had refused to take either Greek or Turkish citizenship. The decree was later used by some Spanish diplomats to save Sephardi Jews from persecution and death during the Holocaust. Prior to the Spanish Civil War, and not taking Ceuta and Melilla into account, about 6,000–7,000 Jews lived in Spain, mostly in Barcelona and Madrid. Likewise, by 1936, the Jewish community in Melilla numbered 6,000, later notably decreasing because of emigration to Venezuela, Israel, mainland Spain, and France. During the Spanish Civil War (1936–1939), synagogues were closed, and post-war worship was kept to private homes. Jewish public life resumed in 1947 with the arrival of Jews from Europe and North Africa.

In the first years of World War II, Jewish refugees entered Spain: "Laws regulating their admittance were written and mostly ignored." They came mainly from Western Europe, fleeing deportation to concentration camps from occupied France, but also from Eastern Europe, especially Hungary. Trudi Alexy refers to the "absurdity" and "paradox of refugees fleeing the Nazis' Final Solution to seek asylum in a country where no Jews had been allowed to live openly as Jews for over four centuries." Throughout World War II, Spanish diplomats of the Franco government extended their protection to Eastern European Jews, especially in Hungary. Jews claiming Spanish ancestry were provided with Spanish documentation without being required to prove their case, and either left for Spain or survived the war with the help of their new legal status in occupied countries. Once the tide of war began to turn, and Count Francisco Gómez-Jordana Sousa succeeded Franco's brother-in-law Ramón Serrano Suñer as Spain's foreign minister, Spanish diplomacy became "more sympathetic to Jews", although Franco himself "never said anything" about this. Around that same time, a contingent of Spanish doctors travelling in occupied Poland was fully informed of the Nazi extermination plans by Governor-General Hans Frank, who was under the impression that they would share his views on the matter; when they came home, they passed the story to Admiral Luis Carrero Blanco, who told Franco.
Diplomats discussed the possibility of using Spain as a route to a containment camp for Jewish refugees near Casablanca, but the plan came to naught for lack of Free French and British support. Nonetheless, control of the Spanish border with France relaxed somewhat at this time, and thousands of Jews managed to cross into Spain (many by smugglers' routes). Almost all of them survived the war. The American Jewish Joint Distribution Committee operated openly in Barcelona. Shortly afterward, Spain began giving citizenship to Sephardi Jews in Greece, Hungary, Bulgaria, and Romania; many Ashkenazi Jews also managed to be included, as did some non-Jews. The Spanish head of mission in Budapest, Ángel Sanz Briz, saved thousands of Ashkenazim in Hungary by granting them Spanish citizenship, placing them in safe houses, and teaching them minimal Spanish so they could pretend to be Sephardim, at least to someone who did not know Spanish. The Spanish diplomatic corps was performing a balancing act: Alexy conjectures that the number of Jews it took in was limited by how much German hostility it was willing to engender. Toward the war's end, Sanz Briz had to flee Budapest, leaving these Jews open to arrest and deportation. An Italian diplomat, Giorgio Perlasca, who was himself living under Spanish protection, used forged documents to persuade the Hungarian authorities that he was the new Spanish ambassador. As such, he continued Spanish protection of Hungarian Jews until the Red Army arrived.

Although Spain effectively undertook more to help Jews escape deportation to the concentration camps than most neutral countries did, there has been debate about Spain's wartime attitude towards refugees. Franco's regime, despite its aversion to Zionism and its belief in a "Judeo-Marxist"-Freemasonry conspiracy, does not appear to have shared the rabid antisemitic ideology promoted by the Nazis. About 25,000 to 35,000 refugees, mainly Jews, were allowed to transit through Spain to Portugal and beyond. Some historians argue that these facts demonstrate a humane attitude by Franco's regime, while others point out that the regime only permitted Jewish transit through Spain.[citation needed] After the war, Franco's regime was quite hospitable to those who had been responsible for the deportation of the Jews, notably Louis Darquier de Pellepoix, Commissioner for Jewish Affairs (May 1942 – February 1944) in Vichy France, as well as to many other former Nazis, such as Otto Skorzeny and Léon Degrelle, and other former Fascists. José María Finat y Escrivá de Romaní, Franco's chief of security, issued an official order dated 13 May 1941 to all provincial governors requesting a list of all Jews, both local and foreign, present in their districts. After the list of six thousand names was compiled, Romaní was appointed Spain's ambassador to Germany, enabling him to deliver it personally to Heinrich Himmler. Following the defeat of Germany in 1945, the Spanish government attempted to destroy all evidence of cooperation with the Nazis, but this official order survived; its survival was reported on 22 June 2010 in the Spanish daily El País and cited by a Jewish newspaper. In the post-war years, synagogues were opened and the communities could maintain a discreet degree of activity. On 29 December 1948, the official state bulletin (BOE) published a list of Sephardi family surnames from Greece and Egypt to which special protection was to be granted. The Alhambra Decree that had expelled the Jews was formally rescinded on 16 December 1968.
Between 1948, the year Israel was founded, and 2010, 1,747 Spanish Jews made aliyah to Israel. There are currently around 50,000 Spanish Jews, with the largest communities in Barcelona and Madrid, each with around 3,500 members. There are smaller communities in Alicante, Málaga, Tenerife, Granada, Valencia, Benidorm, Cádiz, Murcia, and elsewhere. Barcelona, with a Jewish community of 3,500, has the largest concentration of Jews in Spain. Melilla, on the African continent, maintains an old community of Sephardic Jews. The city of Murcia in the southeast of the country has a growing Jewish community and a local synagogue. Kosher olives are produced in this region and exported to Jews around the world. There is also a new Jewish school in Murcia, a result of the growing Jewish population immigrating to the PolarisWorld community in the Murcia region. The modern Jewish community in Spain consists mainly of Sephardim from North Africa, especially the former Spanish colonies.[citation needed] In the 1970s, there was also an influx of Argentine Jews, mainly Ashkenazim, escaping from the military junta. With the birth of the European Community, Jews from other countries in Europe moved to Spain for its weather and lifestyle, as well as for its cost of living relative to northern Europe. Some Jews see Spain as offering an easier life for retirees and for young people. Mazarrón has seen its Jewish community grow, as have La Manga, Cartagena, and Alicante. Moreover, Reform and liberal communities have arisen in cities like Barcelona or Oviedo during the last decade. Some famous Spaniards of Jewish descent are the businesswomen Alicia and Esther Koplowitz, the politician Enrique Múgica Herzog, and Isak Andic, founder of the clothing design and manufacturing company Mango, though only the last is of Sephardic origin. There are rare cases of Jewish converts, like the writer Jon Juaristi. Today there is an interest by some Jewish groups working in Spain in encouraging the descendants of conversos to return to Judaism. This has resulted in a limited number of conversions to the Jewish faith. Like other religious communities in Spain, the Federation of Jewish Communities of Spain (FCJE) has established agreements with the Spanish government regulating the status of Jewish clergy, places of worship, teaching, marriages, holidays, tax benefits, and heritage conservation.

In 2014, residents of a village in Spain called Castrillo Matajudios voted to change the name of their town because of the risk of confusion resulting from the etymology of the name. "Mata" is a common element of placenames in Spain, meaning "forested patch"; in this case, it is likely to be a corruption of "mota", meaning "hill". Confusion arises from the word "mata" also meaning "kill", thus rendering a name that could be interpreted as "Kill the Jews". The village reverted to its earlier name, Castrillo Mota de Judíos (Castrillo Hill of the Jews), which is less likely to startle newcomers. Although a mere anecdote in Spain, where it barely made the national press, the story was widely covered in the English-speaking press of the United States, the United Kingdom, and Israel, which often misrendered the name of the village as "Camp Kill the Jews". In 2020, Spain's parliament adopted the Working Definition of Antisemitism.
In 2014 it was announced that the descendants of Sephardic Jews who were expelled from Spain by the Alhambra Decree of 1492 would be offered Spanish citizenship, without being required to move to Spain or to renounce any other citizenship they might hold. The law lapsed on 1 October 2019, and by that point the justice ministry claimed to have received 132,226 applications and approved 1,500 applicants. In order to be approved, applicants needed to take "tests in Spanish language and culture ... prove their Sephardic heritage, establish or prove a special connection with Spain, and then pay a designated notary to certify their documents." Most applications came from nationals of Latin American countries with high levels of insecurity and violence (mainly Mexico, Colombia, and Venezuela).