[SOURCE: https://en.wikipedia.org/wiki/Youth_marketing]
Youth marketing

In the marketing and advertising industry, youth marketing consists of activities to communicate with young people, typically in the age range of 11 to 35. More specifically, there is teen marketing, targeting people aged 11 to 17; college marketing, targeting college-age consumers, typically ages 18 to 24; and young adult marketing, targeting ages 25 to 34. The youth market is critical because of the demographic's buying power and its members' influence on the spending of family members. In addition, teens and young adults often set trends that are adopted by other demographic groups.

Formats

Youth marketing formats commonly include television advertising, magazine advertising and online marketing. Today, young people expect to be able to learn about, interact with and be entertained by brands or services targeting them online. Other common youth marketing tactics include entertainment marketing, music marketing, sports marketing, event marketing, viral marketing, school and college programs, product sampling and influencer marketing. Brands embraced by youth and used as examples in marketing cases include Vans, which used youth marketing tactics to grow from a niche skateboard shoe brand into a successful international business; Mountain Dew, a well-known soft drink brand that expanded market share through youth marketing tactics in the 1990s; and Nike, which combined endorsements from athletes, celebrities and influencers with deliberately powerful language in its advertising. Another common way advertisers target the older youth market, although it is frowned upon when aimed at teens and young adults, is product placement, in which a brand-name product appears in a medium not necessarily related to the product itself. Companies often pay for their products to be placed in a movie or on a television show. This act, while not an overt form of advertising, seeks to target teens and children in a subtle manner.

Strategies

According to Professor Frank Biocca of the New Jersey Institute of Technology, youth trends develop in an environment in which information about young consumers evolves rapidly and interconnects with advances in technology and content quality. Besides word-of-mouth interaction, marketing now reaches young people through new media formats such as social networking, allowing youth marketing to occur on a multi-sensory level. Products and brands with social power reflect the idea that "corporate cool hunters are searching for teens that have the respect, trust, and admiration of their friends." The American Psychological Association has said that "advertisers understand the teen's desire to be 'cool,' and manipulate it to sell their wares," a concept offered to marketers by psychologists including James McNeal. Marketers thus assume a silent role as manipulators, influencing not only the purchases of teens but also their social statuses. A key aspect of youth marketing, or of marketing to any targeted demographic, is that the products are supposed to fulfill the needs or desires of the consumer, and a large portion of sales promotion is dedicated to accomplishing this.
However, according to Ainsworth Anthony Bailey of the University of Toledo in "The Interplay of Social Influence and Nature of Fulfillment: Effects on Consumer Attitudes," little of this research has focused on the non-fulfillment of promotional promises, which breaks the trust of the consumer and hurts the image of the brand and its product. Brand loyalty, and the sense of belonging to a brand, becomes a primary concern for young consumers. Promotion is always positive; commodities are presented as the road to happiness. In short, advertising uses existing values and symbols rather than reflecting them. Child psychologist Allen Kanner states that "the problem is that marketers manipulate that attraction, encouraging teens to use materialistic values to define who they are and aren't." Teens not only identify with brands but let brands define them, which feeds the notion that marketing and branding shape teen consumerism.

Salancik and Pfeffer's (1978) social information processing theory addresses the mechanisms by which peers influence individuals' behavior and attitudes. According to this theory, social information consists of comments and observations made by people whose views an individual considers relevant, and the literature on social influence suggests that such information can shape consumers' perceptions. It also encompasses the notion of key opinion leaders, who are highly influential within a peer group because they are perceived to have higher social standing, credibility and trust. Social networking sites allow these individuals to filter information through to peers, whether through image posts on Facebook and Instagram or written posts on Twitter. Consumers become aware of their peers' brand preferences, which can influence and change their own perception of a brand. Rather than posting information themselves, many young consumers are chiefly concerned with reading and observing what other people post. This influences purchase behaviour, as young consumers are more inclined to buy products that their peers are purchasing and posting about. Through socially mediated communication, an individual can be influenced towards buying a certain brand, creating brand loyalty in that individual.

Brand loyalty is the tendency of a consumer to prefer one branded product over another, and it plays an increasingly important role in purchase behaviour. Celebrities who are seen as expert and trustworthy are typically used to promote a brand positively, and their portrayal in brand-endorsed advertisements leads young audiences to admire them. Brands that hold social power can influence what consumers do and do not purchase. This plays an important part in youth consumerism and can affect the social statuses of teens. Advertisers take teens' desire to be classified as "cool" into account and manipulate it to sell their products. Products are supposed to fulfill the needs and desires of the consumer, which relates to the idea of self-concept.
Self-concept consists of the actual self (how individuals see themselves) and the ideal self (how they would like to be seen). Brands play an important role in how consumers identify themselves; consumers use them as tools to portray their personal image and goals. Young consumers are still finding their own identity and use brands to define who they are. Psychologists say this leads children to develop more materialistic values and to feel that they must own an endless stream of new products or else feel inferior; having the latest products therefore relates positively to their social status. Fashion, for example, is a powerful social symbol: when a trend is successfully adopted by enough people, it can change the perceived value of the product. A consumer's social identification with a brand matters because it signals loyalty to that brand name, affects how consumers who wear certain products and brands are perceived by others, and can shape an individual's fashion preferences and brand-loyal purchasing behaviour.

According to the Media Awareness Network, the classroom is a major setting in which young people can be targeted, whether through sponsored health education assemblies, vending machines in the lunch room, contests and incentive programs, or the companies that supply schools with new technologies such as Mac computers. The academic setting is a prime marketing channel for reaching youth because the classroom provides a captive audience in front of which any product or brand can be modeled. Examples the Media Awareness Network gives of how the academic environment can quietly market to young people include contest and incentive programs such as Pizza Hut's reading incentive program, in which children receive certificates for free pizza if they achieve a monthly reading goal, and Campbell's Labels for Education project, in which Campbell provides educational resources to schools in exchange for soup labels collected by students.

Company advertisers have exploited 'legislative loopholes' and continued targeting the youth population with advertisements on the Internet. Youth marketing has expanded to online platforms because marketers understand the importance of the youth population and know that building a strong customer relationship now will allow companies to keep promoting products once those young people become adults. Advertising online has become an effective technique for building brand awareness and persuading the youth demographic to purchase products, because online advertising can be longer, repetitive, and engaging. Youth marketing is controversial because children are often unaware that they are being deliberately targeted by advertisers seeking to ensure brand awareness, establish positive attitudes and encourage brand endorsement. Exposure to online advertising can shape children's attitudes, cognitions and behaviour, which concerns parents who want to influence their children's buying decisions and the products and brands their children are exposed to. Beyond advertising's influence on attitudes and behaviour, parents are also apprehensive about who their children talk to online, the personal information they share and their exposure to inappropriate content.
The youth demographic prefers personalised advertisements because it wants to be informed about brands and products similar or related to those it has previously purchased. Interruptive advertising such as television, radio, email and telemarketing is considered an ineffective youth marketing technique because it can be 'annoying'. Young people approve of Internet banner ads because they do not feel pressured by them and feel capable of making their own purchasing decisions. A majority of the youth population do not feel that youth marketing exerts any substantial pressure through advertisements. However, a minority may not be able to comprehend the persuasive intent of an advertisement; they do not necessarily feel pressured, but they are at risk of being deceived. Some argue that companies should stop targeting children with online advertisements because constant exposure on the Internet has made children brand savvy, leading them to insist that their parents purchase particular products. Others argue that it is important for children to develop the analytical skills of consumerism and to learn the distinction between advertising and other media content, so that they are not vulnerable to manipulation.

The Internet has given marketers the opportunity to communicate and rapidly spread information about a product using persuasive techniques on social networking platforms, a practice called viral marketing. Companies target young consumers because technology has become pervasive and most people now own an electronic device that gives them access to the Internet. The youth demographic has become reliant on technology because of the digital culture established very early in their childhood. They grow up to be highly active online, making it easy to reach them directly through games and social networking sites that are full of ads. Young consumers are continuously on the Internet and are unconsciously encouraged to promote brand awareness and spread positive word of mouth about a product or brand to their friends and family. Youth marketing is mainly embedded in phone apps and online games, including virtual worlds that claim to carry no ads; even where a virtual world states that the site is ad free, players may be unaware of internal advertising that encourages them to purchase upgrades to improve their gaming experience. Youth marketing is easier to detect in phone apps, where it tends to appear as small banner advertisements that users must close because they disrupt play. According to the director of Saatchi & Saatchi Interactive, "[the internet] is a medium for advertisers that is unprecedented... there's probably no other product or service that we can think of that is like it in terms of capturing kids' interest." Advertisers reach the young demographic by eliciting personal information, for example by getting them to fill out quick, simple surveys before playing these games. They offer prizes such as T-shirts for filling in "lengthy profiles that ask for purchasing behavior, preferences and information on other family members." Advertisers then take the information obtained from these polls and surveys to "craft individualized messages and ads" designed to draw players into a world centered on a certain product or brand.
The ads that surround the individual in these "cyberworlds" are meant to keep a firm grip on each player, providing a setting in which they are completely immersed in the advertisers' messages, products, and brands. These games are not just games; they are "advergames", as CBS News correspondent John Blackstone reported in "Gotta Have It: The Hard Sell To Kids." Advergames allow marketers to incorporate brands and products into a game-like setting in which the child playing is constantly exposed to them. A 10-year-old girl interviewed by CBS said she could score with Skittles, race with Chips Ahoy or hang out with SpongeBob. "You think about that 30-second commercial, basically a lot of those games are pretty fun to play and kids really get engaged in them," says Ted Lempert, president of Children Now, a group that has successfully pushed for limits on TV advertising to kids. "So really it ends up becoming a 30-minute commercial."

The influence that youth have on household purchases is extremely high, even for high-end items such as the family car. For example, one study estimated that children influenced $9 billion worth of car sales in 1994. One car dealer explains: "Sometimes, the child literally is our customer. I have watched the child pick out the car." According to James U. McNeal, author of "Kids as Customers: A Handbook of Marketing to Children," car manufacturers cannot afford to ignore children in their marketing. Nissan is one of many companies known to do this: it sponsors the American Youth Soccer Organization and a traveling geography exhibit in order to place its brand name and logo in child-friendly settings.

Analyses of child development help explain why marketers can exert so much persuasive power over children at such young ages. At the age of five or six, children have trouble distinguishing fantasy from reality and make-believe from lying. They do not distinguish programs from ads, and may even prefer the ads. Between seven and ten years old, children are most vulnerable to "televised manipulation". At age seven, a child can usually distinguish reality from fantasy, and at nine he or she may begin to suspect deception, often after a personal experience in which a product turned out not to be as advertised. However, children at this age cannot fully work through that logic and continue to have "high hopes" for future products from a particular brand. By the age of ten, the individual starts to take a cynical view of ads, believing that "ads always lie". Around eleven or twelve, a tolerance of adults lying in advertisements starts to develop; at this stage the adolescent is truly "enculturated" into a system of social hypocrisy.

Analysis

Because spending on youth marketing strategies has increased since the 1980s, several studies have been conducted by both researchers and advertisers to determine the effectiveness and impact of advertising campaigns directed at a young audience. Tim Kasser of Knox College conducted a 2004 study of public perceptions of youth marketing, noting that since the late 1990s only two large-scale opinion surveys had been conducted, leaving a gap in knowledge about the scientific and psychological aspects of such tactics.
The purpose of the survey was to assess participants' attitudes towards a variety of youth marketing issues in three sections: negative behaviors in youth, acceptability of marketing in schools, and areas of change. Respondents were asked a range of questions regarding the ethics of youth marketing in all three categories, and public opinion on youth marketing ethics according to the survey was mostly negative. The negative behaviors respondents viewed most unfavorably included increased materialism, nagging parents to purchase advertised products, increased sexual behavior, and consumption of foods that could contribute to obesity. In the second section the most disapproved-of strategy was marketing unhealthy foods in schools, and in the third section most respondents stated that schools should be free of advertising.

A Viacom Brand Solutions International study titled "The Golden Age of Youth" focuses on youth marketing tactics and their application to the older demographics of youth marketing, concentrating on adults from 20 to 34 years old who spend heavily on brand-name products. According to this study, 16- to 19-year-olds go through a "discovery period", in which they discover luxury and name-brand products that could potentially be seen as status symbols. As people grow older they usually move out of the "discovery period" and into the "experimentation period" at ages 20 to 24, when spending shifts to necessities and frivolous spending is limited. Those who do not fit into these age groups form the "golden" category, which consists of anyone aged 25 to 34. Key results of the study were that 25- to 34-year-olds largely did not respond to the same marketing techniques as teens, with some outright refusing to engage with such techniques, even though only 8% of those involved in the study were legally defined teenagers. The study also noted that those in the "golden" category were the happiest of all the groups and were drawn towards more expensive brands than teens were, which is speculated to stem from teenagers' negative perception of materialism and branding.

One side of the public debate on youth marketing that has not been well represented is that of the youth marketing industry itself. John C. Geraci, in the article "What do youth marketers think of selling to kids?", conducted an online poll consisting of 878 interviews, each around 30 minutes long, covering topics from educational backgrounds to ethics in youth marketing. According to the polling, those who work in youth-oriented careers are 92% more likely to have a four-year degree but less likely to have academic training specifically for dealing with children. Most of those polled also feel that the industry's ethical standards are on par with those of other industries, while acknowledging that ethics can be a matter of intentions rather than results, including campaigns rejected by companies for lacking the accessible demographic and focus-testing data that marketers use to appeal to youth more easily.
Most ethical procedures in the youth marketing industry occur behind office walls and are usually not seen by the public, media, or politicians. Geraci surmises that most issues with youth marketing do not originate with those creating the ads but are the result of multiple causes; he cites childhood obesity as a health concern that has developed from multiple factors, which influences how the public reacts to certain ads and products created and promoted by these companies. As early adopters of new technologies, young people are in many ways the defining users of the digital media that embody this new culture. "The burgeoning digital marketplace has spawned a new generation of market research companies which are introducing an entire lexicon of marketing concepts (e.g., "viral marketing," "discovery marketing") to describe some of the unorthodox methods for influencing brand loyalty and purchasing decisions." Because digital media grow so quickly, research on youth marketing is often outdated by the time it is published, even as educators and health professionals are still getting a grasp on the situation.

Youth advertising is an important determinant of consumer behavior; it has been shown to influence young people's product preferences and purchase requests. Some scientists believe that studying youth consumer behavior is harmful because it impacts young people's beliefs, values, and moral judgments; they argue this because they believe that youth are more influenced by advertising messages than adults are. Studies of advertising impact usually focus on three specific effects: cognitive, behavioral, and affective. Cognitive-effect studies tend to focus on children's ability to distinguish commercials from reality and to understand the difference between the two. Such studies follow Piaget's theory to track the development of children. Piaget's theory is divided into stages, including the pre-operational stage and the concrete operational stage; the first covers the age group of 2- to 7-year-olds, whereas the second covers 7- to 12-year-olds. On the other hand, some scientists believe youth marketing is beneficial because it helps young people define who they are as consumers. It has also been shown that requests by youth for advertised products decrease as they mature: youth-oriented audiences tend to become more critical about their purchases and less susceptible to media advertising as they grow up. Gender also plays a role in how youth request advertised products; in most cases, boys are more persistent in their requests than girls. Other factors that may co-determine children's consumer behavior include the socioeconomic level of the family, the frequency and kind of parent-child interaction, and involvement with peer groups. These issues of youth consumer behavior are not confined to one country; the Netherlands offers an example of how youth marketing is handled elsewhere. In the Netherlands, youth advertising may not mislead about the characteristics or price of a product, and advertising aimed at children may not carry excessive authority or exploit children's trust.
But there are loopholes in the way the Netherlands protects children from direct youth marketing; these loopholes usually turn on how concepts such as "misleading", "authority", and "trust" are interpreted.

The introduction of advanced technology has driven media fragmentation, which allows consumers to view information on many different platforms. The Internet has moved towards a new digital media culture in which different forms of media such as email, websites and social media sites converge, giving marketers more opportunities to reach young consumers through different media platforms. This generation can be portrayed as native speakers of a digital language, and as early adopters of new technology young people are in many ways its defining users. Marketers focus primarily on young people because they are enthusiastic users of the new media. Smartphones exemplify this hybrid technology, giving users alternative ways of accessing information, and smartphone use is more common among young people than any other age group. Consumers use smartphones for a wide range of purposes, from browsing websites to reading news articles or checking email. Popular social networking sites such as Facebook, Instagram, Twitter, Pinterest and YouTube have applications designed especially for smartphones, giving users easier access to the sites and making it easier to communicate information. These applications are becoming more popular among the young, as shown by a study in Canada in which most smartphone use was devoted to social networking.

The Internet and social networking sites have made viral marketing more relevant by multiplying the number of communication channels. Viral marketing occurs primarily on the Internet through word-of-mouth communication among consumers. Technology gives consumers more opportunities to voice their opinions and share information about particular brands, products or services, and positive word of mouth can ultimately add value to a brand. It is easy for young consumers to talk about products, and this type of marketing creates dialogue among other young consumers, who can then pass brand and product preferences on to other individuals.

Youth advertising is an important determinant of consumer behaviour; it can influence an individual's product preferences and purchases. Studying the consumer behaviour of youth may be seen as harmful because it could contribute to changing a child's values and morals while their character is still being shaped. Young consumers are more susceptible to marketing because the prefrontal cortex, the part of the brain involved in judgment and impulse control, is not fully mature until early adulthood, which can lead to uninformed decisions and impulsive behaviour. This vulnerability can be exploited, since young consumers are more influenced by advertising messages than adults. Marketers believe that the brand relationships consumers build when they are young will carry over and be maintained as they get older, increasing the chance that those individuals stay brand loyal. Cognitive studies use Piaget's theory to analyse age-based differences in a child's ability to process, understand and comprehend television content.
There are three stages: preoperational for ages 2 to 7, concrete operational for ages 7 to 11 and formal operational for ages above 12. In the preoperational stage, a child focuses on a product's appearance and engages in animistic thinking. In the concrete operational stage, a child gains the ability to understand the world more realistically and to recognise advertisers' intention to sell products. In the formal operational stage, a child is able to distinguish the motives of the advertiser. Factors that may co-determine a child's consumer behaviour include the socio-economic level of the family and parent-child interaction. Young consumers may not have disposable incomes, and many rely solely on parents as a source of finance. Research shows that people with higher incomes tend to have a higher level of price acceptance for consumables, so whether certain brands are purchased may depend on their cost.

Issues regarding youth marketing and advertising and their effect on children are considered all across the world, and the regulations protecting young audiences against certain advertisements vary between countries. In Greece, commercials for toys cannot be aired before 10 pm, and in Belgium it is forbidden to broadcast commercials during children's shows. Restricting when certain advertisements may be shown means fewer young consumers watch and absorb them, reducing the degree of influence the advertisement would otherwise have had.

Studies of adolescents in social marketing are usually concerned with activities that have serious consequences, such as smoking, violent entertainment, alcohol abuse, and fast food consumption, all of which negatively affect a young consumer's consumption behavior. As the demarketing of these harmful behaviors has gradually taken hold, the focus of social and youth marketing has shifted from reinforcing positive behavior towards discouraging abusive behaviors, indicating to the industry that youth marketing can be used for positive ends. It is easy, for example, for a company simply to associate itself with a non-profit or global aid organization, but young people more often than not want to engage actively in experiences that directly affect the world, such as fighting world hunger. This suggests that companies should not merely associate themselves with non-profits but should offer their own non-profit experiences that young consumers can take part in. This idea may seem somewhat abstract, but it links directly to young consumers' behaviour, and creating cause-related experiences is important for the industry to note in youth marketing. Shaping a young consumer's view of a company as a well-known supporter of a positive cause can create brand loyalty beyond traditional brand utilities, and that loyalty in a sense turns the volunteer or youth-oriented customer into a producer of further loyal customers for the brand. In the long run, such opportunities can become embedded in a generation and become self-sustaining for the company, as long as it maintains the events that generate consumer loyalty.
To understand public opinion on youth marketing, one must understand the experiences each generation was exposed to while growing up. Generation Y is in many ways similar to the baby boomer generation, particularly at comparable points in life, so it is essential to look at the formative experiences of each generation; different formative experiences have shaped different members of Generation Y. For example, the events that made the biggest impression on members of Generation Y who graduated from school in 2000 were Columbine, the war in Kosovo, and Princess Diana's death.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Oxford_English_Dictionary]
Oxford English Dictionary

The Oxford English Dictionary (OED) is the principal historical dictionary of the English language, published by Oxford University Press (OUP), a University of Oxford publishing house. The dictionary, which began publication in 1884, traces the historical development of the English language, providing a comprehensive resource to scholars and academic researchers, and provides ongoing descriptions of English language usage in its variations around the world.

Work began on the dictionary in 1857, although publication did not commence until 1884. The work was issued incrementally in unbound fascicles (instalments) as work continued on other parts of the project. The original title was A New English Dictionary on Historical Principles; Founded Mainly on the Materials Collected by The Philological Society. In 1895, the title The Oxford English Dictionary was first used unofficially on the covers of the series, and in 1928 the full dictionary was republished in 10 bound volumes. In 1933, the title The Oxford English Dictionary fully replaced the former name in all occurrences in its reprinting as 12 volumes with a one-volume supplement. More supplements came over the years until 1989, when the second edition was published, comprising 21,728 pages in 20 volumes. Since 2000, compilation of a third edition of the dictionary has been underway, approximately half of which was complete by 2018. In 1988, the first electronic version of the dictionary was made available, and the online version has been available since 2000; by April 2014 it was receiving over two million visits per month. The third edition of the dictionary is expected to be available exclusively in electronic form; the CEO of OUP has stated that it is unlikely that it will ever be printed.

Historical nature

As a historical dictionary, the Oxford English Dictionary features entries in which the earliest ascertainable recorded sense of a word, whether current or obsolete, is presented first, and each additional sense is presented in historical order according to the date of its earliest ascertainable recorded use. Following each definition are several brief illustrating quotations presented in chronological order, from the earliest ascertainable use of the word in that sense either to the last ascertainable use for an obsolete sense, indicating both its life span and the time since its desuetude, or to a relatively recent use for current senses. The format of the OED's entries has influenced numerous other historical lexicography projects. The forerunners to the OED, such as the early volumes of the Deutsches Wörterbuch, had initially provided few quotations from a limited number of sources, whereas the OED editors preferred larger groups of quite short quotations from a wide selection of authors and publications. This influenced later volumes of this and other lexicographical works.

Entries and relative size

As of January 2026, the Oxford English Dictionary contained 520,779 entries, 888,251 meanings, 3,927,862 quotations, and 821,712 thesaurus entries. The dictionary's latest complete print edition (second edition, 1989) was printed in 20 volumes, comprising 291,500 entries in 21,730 pages. The longest entry in the OED2 was for the verb set, which required 60,000 words to describe some 580 senses (430 for the bare verb, the rest in phrasal verbs and idioms).
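The chronological layout described above is mechanical enough to sketch in code. The short Python sketch below is only an illustration under invented, simplified data structures (the Quotation and Sense records and the layout_entry function are assumptions made here, not the OED's actual data model): senses are ordered by the date of their earliest quotation, and each sense's quotations are printed oldest first, mirroring how an entry in a historical dictionary is arranged.

from dataclasses import dataclass, field

@dataclass
class Quotation:
    year: int          # date of the quotation's source text
    text: str          # the illustrative quotation itself

@dataclass
class Sense:
    definition: str
    quotations: list[Quotation] = field(default_factory=list)

    def first_attested(self) -> int:
        # A sense's place in the entry is fixed by its earliest recorded use.
        return min(q.year for q in self.quotations)

def layout_entry(headword: str, senses: list[Sense]) -> str:
    """Arrange senses and their quotations in historical (chronological) order."""
    lines = [headword]
    for n, sense in enumerate(sorted(senses, key=Sense.first_attested), start=1):
        lines.append(f"  {n}. {sense.definition}")
        for q in sorted(sense.quotations, key=lambda q: q.year):
            lines.append(f"       {q.year}  {q.text}")
    return "\n".join(lines)

# Toy example with invented data: the sense attested in 1398 is printed first.
print(layout_entry("example", [
    Sense("A later, figurative sense.", [Quotation(1712, "…"), Quotation(1890, "…")]),
    Sense("The earliest recorded sense.", [Quotation(1398, "…"), Quotation(1602, "…")]),
]))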
As entries began to be revised for the OED3 in sequence starting from M, the record was progressively broken by the verbs make in 2000, then put in 2007, then run in 2011 with 645 senses. Despite its considerable size, the OED is neither the world's largest nor the earliest exhaustive dictionary of a language. One earlier large dictionary is the Grimm brothers' 33-volume dictionary of the German language, the Deutsches Wörterbuch, begun in 1838 and completed in 1961. The first edition of the Vocabolario degli Accademici della Crusca, the first great dictionary devoted to a modern European language (Italian), was published in 1612; the first edition of the Dictionnaire de l'Académie française dates from 1694. The official dictionary of Spanish is the Diccionario de la lengua española (produced, edited, and published by the Royal Spanish Academy), whose first edition was published in 1780. The Kangxi Dictionary of Chinese was published in 1716. The largest dictionary by number of pages is believed to be the Dutch Woordenboek der Nederlandsche Taal. By number of definitions the online dictionary Wiktionary has the most, with 1,680,897 English definitions and 11,361,634 total definitions across all languages as of January 2026.

History

The dictionary began as a Philological Society project of a small group of intellectuals in London, unconnected to Oxford University: Richard Chenevix Trench, Herbert Coleridge, and Frederick Furnivall, who were dissatisfied with the existing English dictionaries. The society expressed interest in compiling a new dictionary as early as 1844, but it was not until June 1857 that they began by forming an "Unregistered Words Committee" to search for words that were unlisted or poorly defined in current dictionaries. In November, Trench's report was not a list of unregistered words; instead, it was the study On Some Deficiencies in our English Dictionaries, which identified seven distinct shortcomings in contemporary dictionaries. The society ultimately realized that the number of unlisted words would be far greater than the number of words in the English dictionaries of the 19th century, and shifted its idea from covering only words that were not already in English dictionaries to a larger project. Trench suggested that a new, truly comprehensive dictionary was needed. On 7 January 1858, the society formally adopted the idea of a comprehensive new dictionary. Volunteer readers would be assigned particular books, copying passages illustrating word usage onto quotation slips. Later the same year, the society agreed to the project in principle, with the title A New English Dictionary on Historical Principles (NED).

Richard Chenevix Trench (1807–1886) played the key role in the project's first months, but his appointment as Dean of Westminster meant that he could not give the dictionary project the time that it required. He withdrew, and Herbert Coleridge became the first editor. On 12 May 1860, Coleridge's dictionary plan was published and research was started. His house was the first editorial office, where he arrayed 100,000 quotation slips in a 54-pigeon-hole grid. In April 1861, the group published the first sample pages; later that month, Coleridge died of tuberculosis, aged 30. Thereupon Furnivall became editor; he was enthusiastic and knowledgeable, but temperamentally ill-suited for the work. Many volunteer readers eventually lost interest in the project, as Furnivall failed to keep them motivated.
Furthermore, many of the slips were misplaced. Furnivall believed that, since many printed texts from earlier centuries were not readily available, it would be impossible for volunteers to efficiently locate the quotations that the dictionary needed. As a result, he founded the Early English Text Society in 1864 and the Chaucer Society in 1868 to publish old manuscripts. Furnivall's preparatory efforts lasted 21 years and provided numerous texts for the use and enjoyment of the general public, as well as crucial sources for lexicographers, but they did not actually involve compiling a dictionary. Furnivall recruited more than 800 volunteers to read these texts and record quotations. While enthusiastic, the volunteers were not well trained and often made inconsistent and arbitrary selections. Ultimately, Furnivall handed over nearly two tons of quotation slips and other materials to his successor.

In the 1870s, Furnivall unsuccessfully attempted to recruit both Henry Sweet and Henry Nicol to succeed him. He then approached James Murray, who accepted the post of editor. In the late 1870s, Furnivall and Murray met with several publishers about publishing the dictionary. In 1878, Oxford University Press agreed with Murray to proceed with the massive project; the agreement was formalized the following year. Twenty years after its conception, the dictionary project finally had a publisher. It would take another 50 years to complete.

Late in his editorship, Murray learned that one especially prolific reader, W. C. Minor, was confined to a mental hospital for (in modern terminology) schizophrenia. Minor was a Yale University–trained surgeon and a military officer in the American Civil War who had been confined to Broadmoor Asylum for the Criminally Insane after killing a man in London. He invented his own quotation-tracking system, allowing him to submit slips on specific words in response to editors' requests. The story of how Murray and Minor worked together to advance the OED was retold in the 1998 book The Surgeon of Crowthorne (US title: The Professor and the Madman), which was the basis for a 2019 film, The Professor and the Madman, starring Mel Gibson and Sean Penn.

During the 1870s, the Philological Society was concerned with the process of publishing a dictionary with such an immense scope. They had pages printed by publishers, but no publication agreement was reached; both the Cambridge University Press and the Oxford University Press were approached. The OUP finally agreed in 1879 (after two years of negotiating by Sweet, Furnivall, and Murray) to publish the dictionary and to pay Murray, who was both the editor and the Philological Society president. The dictionary was to be published as interval fascicles, with the final form in four volumes totalling 6,400 pages. They hoped to finish the project in ten years.

Murray started the project, working in a corrugated iron outbuilding called the "Scriptorium", which was lined with wooden planks, bookshelves, and 1,029 pigeon-holes for the quotation slips. He tracked and regathered Furnivall's collection of quotation slips, which were found to concentrate on rare, interesting words rather than common usages. For instance, there were ten times as many quotations for abusion as for abuse.
He appealed, through newspapers distributed to bookshops and libraries, for readers who would report "as many quotations as you can for ordinary words" and for words that were "rare, obsolete, old-fashioned, new, peculiar or used in a peculiar way". Murray had American philologist and liberal arts college professor Francis March manage the collection in North America; 1,000 quotation slips arrived daily at the Scriptorium and, by 1880, there were 2,500,000.

The first dictionary fascicle was published on 1 February 1884, twenty-three years after Coleridge's sample pages. The full title was A New English Dictionary on Historical Principles; Founded Mainly on the Materials Collected by The Philological Society; the 352-page volume, covering words from A to Ant, cost 12s 6d (equivalent to $82 in 2023). The total sales were only 4,000 copies.

The OUP saw that it would take too long to complete the work unless editorial arrangements were revised. Accordingly, new assistants were hired, and two new demands were made on Murray. The first was that he move from Mill Hill to Oxford to work full-time on the project, which he did in 1885, re-erecting his Scriptorium in the back garden of his new property. Murray resisted the second demand: that if he could not meet the schedule, he must hire a second, senior editor to work in parallel to him, outside his supervision, on words from elsewhere in the alphabet. Murray did not want to share the work, feeling that he would accelerate his pace with experience. That turned out not to be so, and Philip Gell of the OUP forced the promotion of Murray's assistant Henry Bradley (hired by Murray in 1884), who worked independently in the British Museum in London beginning in 1888. In 1896, Bradley moved to Oxford University.

Gell continued harassing Murray and Bradley with his business concerns – containing costs and speeding production – to the point where the project's collapse seemed likely. Newspapers reported the harassment, particularly the Saturday Review, and public opinion backed the editors. Gell was fired, and the university reversed his cost policies. If the editors felt that the dictionary would have to grow larger, it would; it was an important work, and worth the time and money to finish properly.

Neither Murray nor Bradley lived to see it. Murray died in 1915, having been responsible for words starting with A–D, H–K, O–P, and T, nearly half the finished dictionary; Bradley died in 1923, having completed E–G, L–M, S–Sh, St, and W–We. By then, two additional editors had been promoted from assistant work to independent work, continuing without much trouble. William Craigie started in 1901 and was responsible for N, Q–R, Si–Sq, U–V, and Wo–Wy. The OUP had previously thought London too far from Oxford but, after 1925, Craigie worked on the dictionary in Chicago, where he was a professor. The fourth editor was Charles Talbut Onions, who compiled the remaining ranges starting in 1914: Su–Sz, Wh–Wo, and X–Z. In 1919–1920, J. R. R. Tolkien was employed by the OED, researching etymologies of the Waggle to Warlock range; later he parodied the principal editors as "The Four Wise Clerks of Oxenford" in the story Farmer Giles of Ham.

By early 1894, a total of 11 fascicles had been published, or about one per year: four for A–B, five for C, and two for E. Of these, eight were 352 pages long, while the last one in each group was shorter to end at the letter break (which eventually became a volume break).
At this point, it was decided to publish the work in smaller and more frequent instalments: beginning in 1895 there would be, once every three months, a fascicle of 64 pages, priced at 2s 6d. If enough material was ready, 128 or even 192 pages would be published together. This pace was maintained until World War I forced reductions in staff. Each time enough consecutive pages were available, the same material was also published in the original larger fascicles. Also in 1895, the title Oxford English Dictionary was first used; it then appeared only on the outer covers of the fascicles, while the original title remained the official one and was used everywhere else. The 125th and last fascicle covered words from Wise to the end of W and was published on 19 April 1928, and the full dictionary in bound volumes followed immediately.

William Shakespeare is the most-quoted writer in the completed dictionary, with Hamlet his most-quoted work. George Eliot (Mary Ann Evans) is the most-quoted female writer. Collectively, the Bible is the most-quoted work (in many translations); the most-quoted single work is Cursor Mundi.

Additional material for a given letter range continued to be gathered after the corresponding fascicle was printed, with a view towards inclusion in a supplement or revised edition. A one-volume supplement of such material was published in 1933, with entries weighted towards the start of the alphabet, where the fascicles were decades old. The supplement included at least one word (bondmaid) accidentally omitted when its slips were misplaced; many words and senses newly coined (famously appendicitis, coined in 1886 and missing from the 1885 fascicle, which came to prominence when Edward VII's 1902 appendicitis postponed his coronation); and some previously excluded as too obscure (notoriously radium, omitted in 1903, months before its discoverers Pierre and Marie Curie won the Nobel Prize in Physics). Also in 1933, the original fascicles of the entire dictionary were re-issued, bound into 12 volumes, under the title "The Oxford English Dictionary". This edition of 13 volumes including the supplement was subsequently reprinted in 1961 and 1970.

In 1933, Oxford had finally put the dictionary to rest; all work ended, and the quotation slips went into storage. However, the English language continued to change and, 20 years later, the dictionary was outdated. There were three possible ways to update it. The cheapest would have been to leave the existing work alone and simply compile a new supplement of perhaps one or two volumes, but then anyone looking for a word or sense and unsure of its age would have to look in three different places. The most convenient choice for the user would have been for the entire dictionary to be re-edited and retypeset, with each change included in its proper alphabetical place; but this would have been the most expensive option, with perhaps 15 volumes required to be produced. The OUP chose a middle approach: combining the new material with the existing supplement to form a larger replacement supplement. Robert Burchfield was hired in 1957 to edit the second supplement; Charles Talbut Onions turned 84 that year but was still able to make some contributions as well. The work on the supplement was expected to take about seven years. It actually took 29 years, by which time the new supplement (OEDS) had grown to four volumes, starting with A, H, O, and Sea.
They were published in 1972, 1976, 1982, and 1986 respectively, bringing the complete dictionary to 16 volumes, or 17 counting the first supplement. Burchfield emphasized the inclusion of modern-day language and, through the supplement, the dictionary was expanded to include a wealth of new words from the burgeoning fields of science and technology, as well as popular culture and colloquial speech. Burchfield said that he broadened the scope to include developments of the language in English-speaking regions beyond the United Kingdom, including North America, Australia, New Zealand, South Africa, India, Pakistan, and the Caribbean. Burchfield also removed, for unknown reasons, many entries that had been added to the 1933 supplement. In 2012, an analysis by lexicographer Sarah Ogilvie revealed that many of these entries were foreign loanwords, despite Burchfield's claim that he included more such words. The proportion was estimated from a sample calculation to amount to 17% of the foreign loanwords and words from regional forms of English. Some of these had only a single recorded usage, but many had multiple recorded citations; the removals ran against what was thought to be established OED editorial practice and against the perception that Burchfield had opened up the dictionary to "World English".

As work on the supplement neared completion, the management of Oxford University Press began to consider the future of the OED. The copyright of the 1933 first edition of the dictionary was due to come to an end in 1983. There was also a desire to retain, for the benefit of the future development of Oxford dictionaries, the skilled team of lexicographers employed by the Press. Crucially, there was a widely recognized need to revise the dictionary. As a solution to these issues, integration of the first edition and the supplement into one dictionary (which could then be revised) became a priority, though how this should be achieved was uncertain. Richard Charkin, Head of Reference at the OUP, argued in favour of computerizing the two texts and subsequently combining and editing them electronically. Burchfield commissioned research into the activities and resources that would be required to revise and update the OED using computer technology, which concluded that the conversion and integration of the texts was feasible.

In order to find the expertise necessary for the project, in June 1983 a request for tender was submitted by the OUP to various computer companies, software houses, government agencies, and academic departments. Many responses were received; Charkin and OED editor Edmund Weiner made numerous visits in the UK and North America. It was determined that no single agency could carry out the whole task, and that the OUP would have to take on the central managing role. For this purpose a department, known as the New OED Project, was set up under Tim Benbow, with Weiner and John Simpson as co-editors of the new dictionary. The project was divided into two phases. After the first phase, the integrated edition was to be published on paper – and subsequently in electronic form – as the second edition of the OED.
The second phase, a proper revision of the dictionary that would create a third edition, was deferred to a later stage. As partners assisting in the project, the International Computaprint Corporation (now Reed Tech) was chosen to carry out the conversion of the text into electronic form; IBM UK were to supply software and hardware and assist with the building of a computer system for integrating the texts; and the University of Waterloo in Ontario, Canada agreed to work on designing a database system for updating and disseminating the OED after integration. The university set up the Centre for the New Oxford English Dictionary, led by Frank Tompa and Gaston Gonnet; this project went on to become the basis for the Open Text Corporation. A. Walton Litz, an English professor at Princeton University who served on the Oxford University Press advisory council, was quoted in Time as saying "I've never been associated with a project, I've never even heard of a project, that was so incredibly complicated and that met every deadline."

Electronic conversion of the text was done by keyboarding; optical character recognition was not practical due to the poor quality of the printed text of the first edition. Mark-up was then introduced into the text: ICC typists entered a system of tags that included major structural tags (such as headword, pronunciation, etymology, sense section and quotation) as well as typographic tags. This mark-up would add to the dictionary "a whole new world of information". According to Weiner: "The aim from the start, then, was to transform the text into an electronic database, in which every part of the text had its own identifying tag; these tags would form the basis both for complex text searching and for versatile text representation." Eighteen monthly batches of proofs were returned to Oxford between 1985 and 1986 and checked by over 50 proof-readers. In all, 350 million characters were keyed, taking 120 person-years; proof-reading took a total of 60 person-years. The dictionary text was treated as a language with a rule-governed syntax, and a grammar of the text was written at Oxford. The text was parsed by a parser developed by the research group at the University of Waterloo, and the tagging system was automatically converted to adopt SGML conventions.

The integration of the two texts was largely automated; complete entries from the supplement were easily slotted into place in the integrated text. In order to integrate partial entries, a software component was built that used the mark-up to match corresponding pieces of text by headword, part of speech, homonym number and sense number. This process was facilitated by instructions already present in the printed text of the supplement, such as "add to def.:", followed by supplementary definition text. Automatic integration was completed in May 1987; it successfully handled about 80 per cent of the text, saving 50–60 per cent of manual editorial and keyboarding work. Since programs to edit large textual databases had not been developed, a program named LEXX was developed for the project by IBM scientist Mike Cowlishaw and renamed OEDIPUS ("OED Integration, Publishing, and Updating System").
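To make that matching step concrete, here is a minimal Python sketch of how partial supplement entries could be slotted into a base text by a composite key of headword, part of speech, homonym number and sense number. The data structures, field names and the integrate function are invented for illustration; the actual New OED Project software worked on SGML-tagged text and handled many more instruction types, so this is a schematic reconstruction, not the project's code.

from dataclasses import dataclass
from typing import Optional

# Composite key used for matching: (headword, part of speech, homonym no., sense no.)
Key = tuple[str, str, int, str]

@dataclass
class BaseSense:
    headword: str
    pos: str
    homonym: int
    sense: str
    definition: str

@dataclass
class SupplementFragment:
    headword: str
    pos: str
    homonym: int
    sense: str
    instruction: str   # e.g. "add to def.:"
    text: str          # supplementary definition text

def integrate(base: list[BaseSense],
              fragments: list[SupplementFragment]) -> list[SupplementFragment]:
    """Apply supplement fragments to matching base senses.

    Returns the fragments that found no match and must be resolved by hand."""
    index: dict[Key, BaseSense] = {
        (s.headword, s.pos, s.homonym, s.sense): s for s in base
    }
    unresolved: list[SupplementFragment] = []
    for frag in fragments:
        target: Optional[BaseSense] = index.get(
            (frag.headword, frag.pos, frag.homonym, frag.sense)
        )
        if target is None:
            unresolved.append(frag)              # leave for a human editor
        elif frag.instruction.startswith("add to def."):
            target.definition += " " + frag.text  # append supplementary text
        else:
            unresolved.append(frag)              # unknown instruction: defer
    return unresolved

Anything the key lookup cannot resolve is left for manual editing, loosely corresponding to the portion of the text that the automated integration pass could not handle.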
Cross-references were therefore identified by the parser and numbered; after integration, they were checked against their targets and adjusted where appropriate.: 390 The headword of each entry was no longer capitalized, allowing the user to readily see those words that actually require a capital letter.: xv Murray had devised his own notation for pronunciation, there being no standard available at the time, whereas the OED2 adopted the modern International Phonetic Alphabet.: xviii Unlike the earlier edition, all foreign alphabets except Greek were transliterated.: xiii The word "new" was again dropped from the name, and the OED2 was published on paper in 20 volumes in March 1989, with 21,730 pages, 290,500 entries, and 59 million words of text.: xxiii Supplementing the entry headwords, there are 157,000 bold-type combinations and derivatives, 169,000 italicized-bold phrases and combinations, making a total of 616,500 word-forms. There are 137,000 pronunciations, 249,300 etymologies, 577,000 cross-references, and 2,412,400 usage quotations. According to the publishers, the text required 540 megabytes of electronic storage. Up to a very late stage, all the volumes of the first edition were started on initial letter boundaries. For the second edition, there was no attempt to start them on letter boundaries, and they were made roughly equal in size. The 20 volumes started with A, B.B.C., Cham, Creel, Dvandva, Follow, Hat, Interval, Look, Moul, Ow, Poise, Quemadero, Rob, Ser, Soot, Su, Thru, Unemancipated, and Wave. On top of integration, five thousand new words and senses were included in the new edition. The first edition retronymically became the OED1. Following page 832 of Volume XX, Wave-Zyxt, there is a 143-page separately paginated bibliography. This is a conflation of the bibliography of OED1 with that of the 1986 Supplement. When the print version of the second edition was published in 1989, the response was enthusiastic. Author Anthony Burgess declared it "the greatest publishing event of the century", as quoted by the Los Angeles Times. Time dubbed the book "a scholarly Everest", and Richard Boston, writing for The Guardian, called it "one of the wonders of the world". The supplements and their integration into the second edition were a great improvement to the OED as a whole, but it was recognized that most of the entries were still fundamentally unaltered from the first edition. Much of the information in the dictionary published in 1989 was already decades out of date, though the supplements had made good progress towards incorporating new vocabulary. Yet many definitions contained disproven scientific theories, outdated historical information, and moral values that were no longer widely accepted. Furthermore, the supplements had failed to recognize many words in the existing volumes as obsolete by the time of the second edition's publication, meaning that thousands of words were marked as current despite no recent evidence of their use. Accordingly, it was recognized that work on a third edition would have to begin to rectify these problems. The first attempt to produce a new edition came with the Oxford English Dictionary Additions Series, a new set of supplements to complement the OED2 with the intention of producing a third edition from them. 
The previous supplements appeared in alphabetical instalments, whereas the new series had a full A–Z range of entries within each individual volume, with a complete alphabetical index at the end of all words revised so far, each listed with the volume number which contained the revised entry. However, in the end only three Additions volumes were published this way, two in 1993 and one in 1997, each containing about 3,000 new definitions. The possibilities of the World Wide Web and new computer technology in general meant that the processes of researching the dictionary and of publishing new and revised entries could be vastly improved. New text search databases offered vastly more material for the editors of the dictionary to work with, and with publication on the Web as a possibility, the editors could publish revised entries much more quickly and easily than ever before. A new approach was called for, and for this reason it was decided to embark on a new, complete revision of the dictionary. Beginning with the launch of the first OED Online site in 2000, the editors of the dictionary began a major revision project to create a completely revised third edition of the dictionary (OED3), expected to be completed in 2037 at a projected cost of circa £34 million. Revisions were started at the letter M, with new material appearing every three months on the OED Online website. The editors chose to start the revision project from the middle of the dictionary in order that the overall quality of entries be made more even, since the later entries in the OED1 generally tended to be better than the earlier ones. However, in March 2008, the editors announced that they would alternate each quarter between moving forward in the alphabet as before and updating "key English words from across the alphabet, along with the other words which make up the alphabetical cluster surrounding them". With the relaunch of the OED Online website in December 2010, alphabetical revision was abandoned altogether. The revision is expected roughly to double the dictionary in size. Apart from general updates to include information on new words and other changes in the language, the third edition brings many other improvements, including changes in formatting and stylistic conventions for easier reading and computerized searching, more etymological information, and a general change of focus away from individual words towards more general coverage of the language as a whole. While the original text drew its quotations mainly from literary sources such as novels, plays, and poetry, with additional material from newspapers and academic journals, the new edition will reference more kinds of material that were unavailable to the editors of previous editions, such as wills, inventories, account books, diaries, journals, and letters. John Simpson was the first chief editor of the OED3. He retired in 2013 and was replaced by Michael Proffitt, who is the eighth chief editor of the dictionary. The production of the new edition exploits computer technology, particularly since the inauguration in June 2005 of the "Perfect All-Singing All-Dancing Editorial and Notation Application", or "Pasadena". With this XML-based system, lexicographers can spend less effort on presentation issues such as the numbering of definitions. This system has also simplified the use of the quotations database, and enabled staff in New York to work directly on the dictionary in the same way as their Oxford-based counterparts. 
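The field-level tagging and the XML-based Pasadena system described above can be pictured with a small sketch. The entry below and its element names are invented for illustration; they are not the OED's actual SGML/XML schema. The sketch only shows the general idea reported in the text: once headword, pronunciation, etymology, senses and quotations carry their own tags, an entry can be queried and re-rendered field by field rather than treated as flat print.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified entry; the real OED mark-up is far richer
# and uses its own element names and conventions.
SAMPLE_ENTRY = """
<entry>
  <headword>serendipity</headword>
  <pronunciation>/ˌsɛr.ənˈdɪp.ɪ.ti/</pronunciation>
  <etymology>coined by Horace Walpole, 1754</etymology>
  <sense n="1">
    <definition>The faculty of making happy and unexpected discoveries by accident.</definition>
    <quotation year="1754">an illustrative quotation would be keyed here</quotation>
  </sense>
</entry>
"""

entry = ET.fromstring(SAMPLE_ENTRY)

# Field-level queries become trivial once the structure is tagged:
print(entry.findtext("headword"))      # serendipity
print(entry.findtext("etymology"))     # coined by Horace Walpole, 1754
for sense in entry.findall("sense"):
    print(sense.get("n"), sense.findtext("definition"))
```

Tagged structure of this general kind is also what makes operations like the automated matching of supplement entries to first-edition entries (by headword, part of speech, homonym number and sense number) conceivable at all.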
Other important computer uses include internet searches for evidence of current usage and email submissions of quotations by readers and the general public. Wordhunt was a 2005 appeal to the general public for help in providing citations for 50 selected recent words, and produced antedatings for many. The results were reported in a BBC TV series, Balderdash and Piffle. The OED's readers contribute quotations: the department currently receives about 200,000 a year. OED currently contains over 500,000 entries. The online OED is updated on a quarterly basis, with the addition of new words and senses, and the revision of existing entries. Formats In 1971, the 13-volume OED1 (1933) was reprinted as a two-volume Compact Edition, by photographically reducing each page to one-half its linear dimensions; each compact edition page held four OED1 pages in a four-up ("4-up") format. The two-volume letters were A and P; the first supplement was at the second volume's end. The Compact Edition included, in a small slip-case drawer, a Bausch & Lomb magnifying glass to help in reading reduced type. Many copies were inexpensively distributed through book clubs. In 1987, the second supplement was published as a third volume to the Compact Edition. The 20-volume OED2 (1989) was republished in 1991 as a compact edition (ISBN 978-0-19-861258-2). The format was re-sized to one-third of original linear dimensions, a nine-up ("9-up") format requiring a stronger magnifying glass (included), but allowing publication of a single-volume dictionary. This version includes definitions of 500,000 words, in 290,000 main entries, with 137,000 pronunciations, 249,300 etymologies, 577,000 cross-references, and 2,412,000 illustrative quotations. It is accompanied by A User's Guide to the "Oxford English Dictionary" by Donna Lee Berg. After this version was published, however, book club offers commonly continued to sell the two-volume 1971 Compact Edition. Once the dictionary was digitized and online, it was also available to be published on CD-ROM. The text of the first edition was made available in 1987. Afterward, three versions of the second edition were issued. Version 1 (1992) was identical in content to the printed second edition, and the CD itself was not copy-protected. Version 2 (1999) included the Oxford English Dictionary Additions of 1993 and 1997. These CD-ROM editions are for Microsoft Windows only. Version 3.0 was released in 2002 with additional words from the OED3 and software improvements. Version 3.1.1 (2007) added support for hard disk installation, so that the user does not have to insert the CD to use the dictionary. It has been reported that this version will work on operating systems other than Windows, using emulation programs. Version 4.0 of the CD was released in June 2009 and has applications for both Windows (7 and later) and MacOS X (10.4 and later). This version uses the CD drive for installation, running only from the hard drive. On 14 March 2000, the Oxford English Dictionary Online (OED Online) became available to subscribers. The online database containing the OED2 is updated quarterly with revisions that will be included in the OED3 (see above). The online edition is the most up-to-date version of the dictionary available. The OED website is not optimized for mobile devices, but the developers have stated that there are plans to provide an API to facilitate the development of interfaces for querying the OED. 
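As an aside on the compact formats described above, the "4-up" and "9-up" page counts follow directly from the stated linear reductions: shrinking a page to one-half of its linear dimensions reduces its area to a quarter, so four original pages tile one compact page, and a one-third reduction gives nine. A trivial sketch of that arithmetic:

```python
def pages_per_compact_page(linear_divisor: int) -> int:
    """Original pages that fit on one compact page when each page is reduced
    to 1/linear_divisor of its linear dimensions: area scales with the square
    of the linear factor."""
    return linear_divisor ** 2

print(pages_per_compact_page(2))  # 4 -> the 1971 Compact Edition ("4-up")
print(pages_per_compact_page(3))  # 9 -> the 1991 single-volume compact OED2 ("9-up")
```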
The price for an individual to use this edition is £100 or US$100 a year; consequently, most subscribers are large organizations such as universities. Some public libraries and companies have also subscribed, including public libraries in the United Kingdom, where access is funded by the Arts Council, and public libraries in New Zealand. Individuals who belong to a library which subscribes to the service are able to use the service from their own homes without charge. Relationship to other Oxford dictionaries The OED's utility and renown as a historical dictionary have led to numerous offspring projects and other dictionaries bearing the Oxford name, though not all are directly related to the OED itself. The Shorter Oxford English Dictionary, originally started in 1902 and completed in 1933, is an abridgement of the full work that retains the historical focus, but does not include any words which were obsolete before 1700 except those used by Shakespeare, Milton, Spenser, and the King James Bible. A completely new edition was produced from the OED2 and published in 1993, with revisions in 2002 and 2007. The Concise Oxford Dictionary is a different work, which aims to cover current English only, without the historical focus. The original edition, mostly based on the OED1, was edited by Francis George Fowler and Henry Watson Fowler and published in 1911, before the main work was completed. Revised editions appeared throughout the twentieth century to keep it up to date with changes in English usage. The Pocket Oxford Dictionary of Current English was originally conceived by F. G. Fowler and H. W. Fowler to be compressed, compact, and concise. Its primary source is the Oxford English Dictionary, and it is nominally an abridgement of the Concise Oxford Dictionary. It was first published in 1924. In 1998 the New Oxford Dictionary of English (NODE) was published. While also aiming to cover current English, NODE was not based on the OED. Instead, it was an entirely new dictionary produced with the aid of corpus linguistics. Once NODE was published, a similarly brand-new edition of the Concise Oxford Dictionary followed, this time based on an abridgement of NODE rather than the OED; NODE (under the new title of the Oxford Dictionary of English, or ODE) continues to be principal source for Oxford's product line of current-English dictionaries, including the New Oxford American Dictionary, with the OED now only serving as the basis for scholarly historical dictionaries. Spelling The OED lists British headword spellings (e.g., labour, centre) with variants following (labor, center, etc.). For the suffix more commonly spelt -ise in British English, OUP policy dictates a preference for the spelling -ize, e.g., realize vs. realise and globalization vs. globalisation. The rationale is etymological, in that the English suffix is mainly derived from the Greek suffix -ιζειν, (-izein), or the Latin -izāre. However, -ze is also sometimes treated as an Americanism insofar as the -ze suffix has crept into words where it did not originally belong, as with analyse (British English), which is spelt analyze in American English. Reception and criticism British prime minister Stanley Baldwin described the OED as a "national treasure". Author Anu Garg, founder of Wordsmith.org, has called it a "lex icon". Tim Bray, co-creator of Extensible Markup Language (XML), credits the OED as the developing inspiration of that markup language. 
However, despite its claims of authority, the dictionary has been criticized since the 1960s because of its scope, its claims to authority, its British-centredness and relative neglect of World Englishes, its implied but unacknowledged focus on literary language and, above all, its influence. The OED, as a commercial product, has always had to steer a line between scholarship and marketing. In his review of the 1982 supplement, University of Oxford linguist Roy Harris writes that criticizing the OED is extremely difficult because "one is dealing not just with a dictionary but with a national institution", one that "has become, like the English monarchy, virtually immune from criticism in principle". He further notes that neologisms from respected "literary" authors such as Samuel Beckett and Virginia Woolf are included, whereas those found in newspapers or other less "respectable" sources hold less sway, regardless of their usefulness. He writes that the OED's "[b]lack-and-white lexicography is also black-and-white in that it takes it upon itself to pronounce authoritatively on the rights and wrongs of usage", faulting the dictionary's prescriptive rather than descriptive usage. To Harris, this prescriptive classification of certain usages as "erroneous" and the complete omission of various forms and usages cumulatively represent the "social bias[es]" of the (presumably well-educated and wealthy) compilers. However, the Guide to the Third Edition of the OED has stated that "Oxford English Dictionary is not an arbiter of proper usage, despite its widespread reputation to the contrary" and that the dictionary "is intended to be descriptive, not prescriptive". The identification of "erroneous and catachrestic" usages is being removed from third edition entries, sometimes in favour of usage notes describing the attitudes to language which have previously led to these classifications. Another avenue of criticism is the dictionary's non-inclusion of etymologies for words of AAVE or African language origin such as jazz, dig or badmouth (the latter two are possibly of Wolof and Mandinka languages, respectively). As of 2022, OUP is preparing a specialized Oxford Dictionary of African American English in collaboration with Harvard University's Hutchins Center for African and African American Research, with literary critic Henry Louis Gates Jr. being the project's editor-in-chief. Harris also faults the editors' "donnish conservatism" and their adherence to prudish Victorian morals, citing as an example the non-inclusion of "various centuries-old 'four-letter words'" until 1972. However, no English dictionary included such profanity, for fear of possible prosecution under British obscenity laws, until after the conclusion of the Lady Chatterley's Lover obscenity trial in 1960. The Penguin English Dictionary of 1965 was the first dictionary that included the word fuck. Joseph Wright's English Dialect Dictionary had included shit in 1905. The OED's claims of authority have also been questioned by linguists such as Pius ten Hacken, who notes that the dictionary actively strives toward definitiveness and authority but can only achieve those goals in a limited sense, given the difficulties of defining the scope of what it includes. Founding editor James Murray was also reluctant to include scientific terms, despite their documentation, unless he felt that they were widely enough used. In 1902, he declined to add the word radium to the dictionary. 
Research using the OED The OED has been used to support research in fields such as linguistics, psycholinguistics, and psychology. Examples include the extension of word meanings via metaphor, the evolution of measurement terms like "foot" from concrete to abstract meanings, and the identification of systematic patterns in word blends (e.g., "brunch" from a blend of "breakfast" and "lunch"). The OED in popular culture The British quiz show Countdown awarded the leather-bound complete version to the champions of each series between its inception in 1982 and Series 63 in 2010. The prize was axed after Series 83, completed in June 2021, as it was considered out of date. The 2020 novel The Dictionary of Lost Words by Pip Williams centres on the creation of the OED, the fictional narrator spending much time in the Scriptorium as a child, the daughter of a fictional widowed lexicographer, and later becoming an assistant there. It has been adapted for the stage, and a television series is planned. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/UTC%2B04:00] | [TOKENS: 162] |
Contents UTC+04:00 UTC+04:00 is an identifier for a time offset from UTC of +04:00. In ISO 8601, the associated time would be written as 2019-02-07T23:28:34+04:00. This time is used as standard time (year-round) in principal cities including Abu Dhabi, Dubai, Baku, Tbilisi, Yerevan, Samara, Muscat, Port Louis, Victoria, Saint-Denis, and Stepanakert. There are discrepancies between official UTC+04:00 and geographical UTC+04:00: some areas within the geographical UTC+04:00 band instead use offsets ranging from UTC+03:00 to UTC+06:00. |
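A minimal sketch, using only the Python standard library, showing how a fixed +04:00 offset produces exactly the ISO 8601 form quoted above, and what the same instant looks like in UTC:

```python
from datetime import datetime, timedelta, timezone

# A fixed offset of four hours ahead of UTC, with no daylight saving rules.
utc_plus_4 = timezone(timedelta(hours=4))

# The article's example instant, expressed in that offset.
t = datetime(2019, 2, 7, 23, 28, 34, tzinfo=utc_plus_4)

print(t.isoformat())                           # 2019-02-07T23:28:34+04:00
print(t.astimezone(timezone.utc).isoformat())  # 2019-02-07T19:28:34+00:00
```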
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Zaza_language] | [TOKENS: 7607] |
Contents Zaza language Zaza (endonym: Zazakî, Dimlî, Dimilkî, Kirmanckî, Kirdkî, Zonê ma, lit. 'Our language'), also known by its endonym Zazaki, is an Iranian language belonging to the Northwestern Iranian branch and spoken in various regions of Turkey by the Zaza people. The language has three main dialects; northern, southern, and central and these dialects are spoken in Bingöl, Elazığ, Erzincan, Erzurum, Malatya, Muş, Bitlis and Tunceli provinces in Eastern Anatolia; Adıyaman, Diyarbakır and Şanlıurfa provinces in Southeastern Anatolia; Kars and Ardahan in Northeastern Anatolia; Sivas, Kayseri, Aksaray in Central Anatolia and Tokat and Gümüşhane in Black Sea regions of Turkey. International linguistic authorities such as SIL Global, Glottolog and Ethnologue divide the language into northern and southern dialects with numerous sub-dialects. In terms of grammar, genetics (diachronic) and core vocabulary, the Zaza language is closely related to Tati, Talysh, Sangsari, Semnani, Mazandarani and Gilaki. The language shares also significant grammatic similarities with Parthian and Bactrian, two ancient and extinct Iranian languages spoken in antiquity. The glossonym Zaza originated as a pejorative. According to Ethnologue, Zaza is spoken by around 1.48 million people, and the language is considered threatened due to a declining number of speakers, with many shifting to Turkish. Nevins, however, puts the number of Zaza speakers between two and three million. Macrolanguage Zaza language is classified as a macrolanguage by international linguistic authorities. SIL International classifies Zaza language as a macrolanguage, including the varieties of Southern Zaza (diq) and Northern Zaza (kiu). Other international linguistic authorities, the Ethnologue and the Glottolog, also classify the Zaza language as a macrolanguage composed of two distinct languages: Southern Zaza and Northern Zaza. Classification The first linguist to linguistically study and analyze the Zaza language was the German linguist Oskar Mann [de]. Commissioned by the Prussian Academy of Sciences in 1905/1906 to document and linguistically analyze Western Iranian languages, Oskar Mann conducted extensive Zaza compilations and language records in the Bingöl and Siverek regions. He analyzed the Zaza language from phonological, morphological, lexical and etymological aspects and demonstrated that Zaza is a northwestern Iranian language in its own right, among the Iranian languages. His work was subsequently published by Karl Hadank, who also classified Zaza as a distinct northwestern Iranian language. Since then, the language has been classified as a distinct northwestern Iranian language within the Northwest Iranian languages and is classified as a distinct northwestern Iranian language by international linguistic authorities. The Ethnologue classifies Zaza within a genetic subgroup called Zaza-Gorani, along with Gorani, within the Northwestern Iranian languages. This classification is contested. There are significant linguistic differences between the Zaza and Gorani languages, despite some similarities. Zaza shares many linguistic features with the Caspian languages that are not found in Gorani. No unifying characteristics have been found from Zaza and the Gorani group to demonstrate that they constitute a group on their own in contrast to other Northwestern language groups. 
The Glottolog database proposes a more detailed classification and classifies Zaza within the Adharic subgroup (related to Old Azeri), along with languages such as Talysh, Tati and its dialects such as Harzandi, Kajali and Kilit, spoken on the southern shores of the Caspian Sea. Belgian philologist and Iranologist Pièrre Lecoq classifies Zaza within the Medo-Caspian subgroup along with languages such as Tati, Talysh, Gilaki, Semnani and Balochi. German linguist Jost Gippert, has demonstrated that the Zaza language is very closely related to the Parthian language in terms of phonetics, morphology, syntax and lexicon and that it has many words in common with the Parthian language. According to him, the Zaza language may be a residual dialect of the Parthian language that has survived to the present day. He classifies Zaza within the Hyrcanian subgroup, referring to the historical Hyrcania region south of the Caspian Sea, and includes languages such as Sangsari and Balochi in the same subgroup as Zaza. Gippert also demonstrated that the Zaza language is genetically very close to Semnani and suggested that both languages may have originated from a common ancestor. According to linguist Ludwig Paul [de] the Zaza language is a northwestern Iranian language in its own right within the Northwestern Iranian languages and it is linguistically close to Tati and its dialects (modern Azeri dialects), Talysh and Gorani. Instead of grouping the Zaza language with another language, he classified Zaza as a standalone language within the Northwestern Iranian languages. Encyclopædia Iranica classifies the Zaza language within the Caspian subgroup of the Northwestern Iranian languages, along with Talysh, Tati dialects, Harzandi, Gilaki, Mazanderani, Gurani and Semnani dialects and states that historically all of these Caspian dialects are related to the Parthian language. The Glottolog database proposes the following phylogenetic classification: The Zaza language is considered a branch of the Kurdic subgroup within the Northwestern Iranian languages in a study. The varieties of Kurdic do not directly descend from any known Middle Iranian languages, such as Middle Persian or Parthian, or from Old Iranian languages, such as Avestan or Old Persian. Zaza is considered a macrolanguage, consisting of Southern and Northern Zaza. Glottolog database classifies Zaza under the Adharic branch of Northwestern Iranian languages. Linguistically, the classification of Zazaki as either a Kurdish dialect or a distinct language is a topic of debate among scholars. Some, such as Ludwig Paul, do not consider Zazaki and Gorani to be Kurdish dialects. According to him, they can only be classified as Kurdish dialects in a political and ethnic context, and it would be more accurate to refer to them as Kurdish languages. The differences between them arise from the Kurdish adoption of Persian linguistic features due to historical contact. Other scholars contend that the classification of Zazaki as a separate language from Kurdish is based on insufficient data, and a detailed comparison between Zazaki and Kurmanji in terms of phonology, morphology, syntax, and lexicon reveals a significant degree of shared features, suggesting that Zazaki and Kurmanji are dialects of the same language. 
Furthermore, arguments regarding the classification of both Zazaki and Gorani highlight that the distinction between a dialect and a language is a social construct influenced by factors such as shared identity, history, beliefs, and living conditions, rather than being based solely on linguistic evidence. Therefore, Kurdish can be seen as a socio-cultural umbrella that encompasses both recognized Kurdish dialects (such as Kurmanji, Sorani, and Southern Kurdish) as well as the Zaza and Gorani languages. The term "Kurdic" is used to refer to this broad grouping. Endangerment Many Zaza speakers resided in conflict-affected regions of eastern Turkey and have been significantly impacted by both the current and historical political situations. Only a few elderly monolingual Zaza speakers remain, while the younger generation predominantly speaks other languages. Turkish laws enacted from the mid-1920s until 1991 banned Kurdish language, including Zazaki, from being spoken in public, written down, or published. The Turkish state's efforts to enforce the use of Turkish have led many Zaza speakers to leave Turkey and migrate to other countries, primarily Germany, Sweden, Netherlands and the United States, and Australia. Efforts to preserve and revitalize Zazaki are ongoing. Many Kurdish writers in Turkey are fighting to save Zazaki with children's books and others with newspapers, but the language faces an uncertain future. The decline of Zazaki speakers could also lead the Zazas to lose their identity and shift to a Turkish identity. According to a study led by Dr. Nadire Güntaş Aldatmaz, an academic at Ankara University, 402 people aged between 15 and 75 from Mamekîye in Dersim province, were interviewed. Respondents younger than 18 mostly stated their ethnicity as 'Turk', their mother language as 'Turkish', and their religion as 'Islam', despite having some proficiency in Zaza. History Writing in Zaza is a recent phenomenon. The first literary work in Zaza is Mewlîdu'n-Nebîyyî'l-Qureyşîyyî by Ehmedê Xasi in 1899, followed by the work Mawlûd by Osman Efendîyo Babij in 1903. As the Kurdish language was banned in Turkey during a large part of the Republican period, no text was published in Zaza until 1963. That year saw the publication of two short texts by the Kurdish newspaper Roja Newe, but the newspaper was banned and no further publication in Zaza took place until 1976, when periodicals published a few Zaza texts. Modern Zaza literature appeared for the first time in the journal Tîrêj in 1979 but the journal had to close as a result of the 1980 coup d'état. Throughout the 1980s and 1990s, most Zaza literature was published in Germany, France and especially Sweden until the ban on the Kurdish language was lifted in Turkey in 1991. This meant that newspapers and journals began publishing in Zaza again. The next book to be published in Zaza (after Mawlûd in 1903) was in 1977, and two more books were published in 1981 and 1986. From 1987 to 1990, five books were published in Zaza. The publication of books in Zaza increased after the ban on the Kurdish language was lifted and a total of 43 books were published from 1991 to 2000. As of 2018, at least 332 books have been published in Zaza. Due to the above-mentioned obstacles, the standardization of Zaza could not have taken place and authors chose to write in their local or regional Zaza variety. In 1996, however, a group of Zaza-speaking authors gathered in Stockholm and established a common alphabet and orthographic rules which they published. 
Some authors nonetheless do not abide by these orthographic rules in their works. In 2010, Zaza was classified as a "vulnerable" language by UNESCO. The Council of Higher Education of Turkey approved the opening of the Zaza Language and Literature Department at Munzur University in 2011, and the department began accepting students in 2012. In the following year, Bingöl University established the same department. TRT Kurdî also broadcasts in the language. Some TV channels which broadcast in Zaza were closed after the 2016 coup d'état attempt. Dialects There are three main Zaza dialects, and Zaza shows many similarities with other Northwestern Iranian languages. Ludwig Paul divides Zaza into three main dialect groups; in addition, there are transitional and edge varieties that have a special position and cannot be fully assigned to any dialect group. Grammar In terms of grammar, genetics (diachronic) and core vocabulary, the Zaza language is closely related to Old Azeri, Tati of Iran, Talysh, Sangsari, Semnani, Mazandarani and Gilaki, languages spoken on the shores of the Caspian Sea and in northern Iran. Zaza also has distinctive and significant grammatical similarities with Parthian and Bactrian, two Iranian languages of late antiquity. Zaza, along with Talysh, Tati, Semnani, Sangesari, Gilaki and some other central Iranian dialects, forms a distinct belt within the Northwestern Iranian languages. This belt is geographically divided by speakers of Persian, Azerbaijani and Kurdish into two parts: Zaza, Talysh and Tati in the western part and Semnani, Sangesari, Gilaki (and other Caspian/Central dialects) in the eastern part. The Zaza language, along with Tati, Talysh and some northwestern dialects, has strongly preserved its Northwestern Iranian isogloss roots and is quite distant from Persian and Kurdish. Overall, from Zaza, Tati and Talysh downward to Kurdish and Persian, the Western Iranian languages are successively less "archaic"; Zaza, along with Talysh and Tati, is located at the westernmost part of the Western Iranian languages, while Persian and Kurdish are positioned at the easternmost part. Like most other languages of the belt, the Zaza language shows a two-case system in the nouns, with an oblique ending generally going back to the Old Iranian genitive ending *-ahya. Linguist W. B. Henning demonstrated about a hundred years ago that Zaza, Talysh, Tati/Azeri, Semnani, Gilaki and the Caspian dialects derive their present stem from the same Old Iranian present participle ending in *-ant-. In contrast to these languages, in Kurdish and Persian the present tense is formed by adding a modal prefix to the present stem: می mî- in Persian (mî-ravam, 'I go') and di- in Kurdish (di-çim, 'I go'). Morphologically, like most of the languages of the belt, the dialects of the Zaza language show a two-case system of nouns. In Zaza, the oblique ending -ī (going back to the Old Iranian genitive ending *-ahya) is attached only to masculines. In Southern Zaza (Çermik-Siverek dialects) there is an ending -e(r) attached to feminine nouns in the oblique case, and its origin is the old stem expansion in *-a(r) of relationship terms.
The Zaza ending -e(r), originally denoting the oblique case of relationship terms of both genders, probably spread to feminines in general later. Just as in Zaza, in the Tati dialects the oblique relationship-term ending -r has also spread from relationship terms to other nouns. Like Zaza, other members of the belt (Talysh, Semnani and Tati) have the same oblique case of relationship terms. For example, mother (nom.) and mother (obl.) are mā -> mār in Zaza, mâ -> mâr in Tati and mā -> moār in Talysh, and brother (nom.) and brother (obl.) are bıra -> bırar in Zaza, bera -> berar in Tati and bäre -> bärār in Semnani. Henning also demonstrated that the Harzandi dialect of the Tati language has many linguistic features in common with Zaza and Talysh and classified it with these languages. Zaza, like a number of other Iranian languages such as Talysh, Tati, central Iranian languages and dialects like Semnani, Kahangi and Vafsi, Balochi and Kurmanji, features split ergativity in its morphology, demonstrating ergative marking in past and perfective contexts and nominative-accusative alignment otherwise. Syntactically it is nominative-accusative. The grammatical gender forms of Old Iranian (except for the neuter) remain largely the same in the Zaza language. The distinction between masculine and feminine forms is present in the entire morphology of the Zaza language, including nouns, adjectives, pronouns, cases and verb conjugations. In the Old Iranian era, languages such as Avestan and Old Persian featured a grammatical gender system that included masculine, feminine, and neuter. In Zaza, the feminine suffix of Old Iranian -ā remained as the unstressed suffix -e [-ə] in the northern dialect and as -ı in the southern dialect. Along with Zaza, the Semnani and Tati languages also exhibit the same feminine suffix form. For example, the word for donkey is her in Zaza and xar in Semnani and Tati. While her and xar refer to a jack or jackass, a male donkey, their feminine forms with the unstressed suffix -e, here in Zaza and xára in Semnani and Tati, refer to a jenny or jennet, a female donkey. Among all Western Iranian languages, Zaza, Semnani, Sangsari, the Tati dialects, Hazārrūdi, Cālī, Tākestāni, Kajali, Khalkhali, Karani, Lerdi, Diz, Sagzābādi, Eštehārdi, Ashtiani, Amorei, Alviri, Abyānei and central Iranian languages like Jowšaqāni, Abuzeydābādi, Fārzāndī, Delījanī and Kurmanji distinguish between masculine and feminine grammatical gender. In Zaza, each noun belongs to one of those two genders. In order to correctly decline any noun and any modifier or other type of word affecting that noun, one must identify whether the noun is feminine or masculine. Most nouns have inherent gender. However, some nominal roots have variable gender, i.e. they may function as either masculine or feminine nouns. As a unique linguistic feature, among all Northwestern Iranian languages, grammatical gender is marked on verbs only in the Zaza, Semnani, Sangsari and Tati languages. Unlike other Northwestern Iranian languages, Zaza and some Tati dialects also distinguish gender in the second person singular. In addition to nouns, adjectives and verbs, grammatical gender is marked on demonstrative pronouns too in Zaza, Semnani and the Tati dialects. The Zaza verbal forms are based on three stems: subjunctive, present, and past.
The subjunctive and past stems generally continue inherited Iranian present stems, while the present stems are derived from the Zaza subjunctive stems by the formant -(e)n. Another feature of the Zaza language dating back to the Old and Middle Iranian era is that the passive stem (diathesis) is formed synthetically. The Old Indo-Iranian passive stem -iiā/-i and its reflection in Pahlavi -īh- are still preserved in Zaza, Tati, Talysh, Semnani, and the Central and Judeo-Iranian dialects. The old passive stem appears as -i- in Zaza, and the passive stem is derived by adding -i to the verb stem. Just as in Zaza, in other members of the belt, the Tati dialects (e.g. Eštehārdī, Ashtiani, Alviri, Čālī, Čarza), Talysh (e.g. Asālem) and Semnani, the passive stem is formed by adding -i to the verb stem; it is formed with -i/-y in the Central Plateau and Judeo-Iranian languages and with -ī in Eastern Balochi. Examples of the passive voice are: nan weriyeno: bread (masc.) is being eaten, şıt şımiyeno: milk (masc.) is being drunk, nuşte nuşiyeno: the text (masc.) is being written, keye viniyeno: the house (masc.) is being seen. The causative stem is derived by -n, which derives from the causative suffix -ēn of the Middle Iranian period. Examples of the causative voice are: veşneno: (he) burns, vurneno: (he) changes, musneno: (he) teaches. The causative stem -n- of Zaza appears as -(e)n in Semnani, -en- in Tati and Talysh, -en(d)- in Mazanderani, -an in Gilaki and -ēn in Balochi. The infinitive ending is -ene in the northern dialect and -enı in the southern dialect of the Zaza language. The basic stem of the verb is formed by deleting this ending. The present tense is formed by taking the present stem of the verb, adding the present participle ending and conjugating it. Zaza, Semnani, Talysh, Tati/Azeri and Gilaki derive their present stem from the same Old Iranian present participle ending in *-ant-. Grammatical gender is marked on verbs, as in Semnani and Tati/Azeri, for example in the present stems of the verbs şiyaene ('to go') and vınderdene ('to stop'). The present continuous is used in several instances. Its most common use is to describe something that is happening at the exact moment of speech. The present continuous can also describe an event planned in the future when combined with a time indicator for the future. The present continuous in Zaza is formed by conjugating the copula in accordance with the subject and conjugating the verb in accordance with the present tense. Nouns in Zaza are unmarked for the singular and marked with the unstressed -i in the plural. For instance, kerg (hen) kergi (hens), verg (wolf) vergi (wolves), merdım (man) merdımi (men), vaş (grass) vaşi (grasses), estor (horse) estori (horses). Just as in Zaza, in Semnani, another member of the belt, nouns are marked with the plural suffix -i in the nominative plural. For example, trees/horses = dari/estori in Zaza and dåri/asbi in Semnani. In addition to the shared nominative plural suffix -i, nouns in both Zaza and Semnani are marked with the plural suffix -un in the oblique plural. Among all Western Iranian languages, only in Zaza and closely related languages such as Semnani (and its dialects Sorkhei, Lasgerdi and Biyabunaki) and Tati (and its dialects Harzandi and Kilit) is the number three cognate with Parthian hry/hrē. Old Iranian *θr further became *hr, which in initial position acquired a supporting vowel here.
In these languages, the v -> b and s -> h consonant change (vist and das in Zaza, Semnani, Tati and Parthian vs. bist and dah in Persian and Kurdish) is also clearly evident. As a unique linguistic feature, only in Zaza and Semnani does the number one take both masculine and feminine forms. In Avestan, an extinct Old Iranian language, numbers took gender-specific forms. Cardinal numbers in Zaza closely parallel those of the other closely related languages. The cardinal numbers from 10 to 20 and the tens in Zaza exhibit strong similarities with Avestan, which, together with Old Persian, is one of the two directly attested languages of the Old Iranian era, and with Parthian, an extinct Northwestern Iranian language of the Middle Iranian era. The stressed suffix "-ıj" added to nouns of place in Zaza denotes origin or relationship. Just as in Zaza, in the Tati and Talysh languages of the belt the suffixes "-ij" and "-ıj", respectively, are added to nouns to denote origin or relationship. This suffix is thought to be a relic of the Daylami language. The word "dehche" in the Daylami language meant a peasant or farmer, someone from a village. Its derivation was deh (village) + che (the suffix denoting origin or relationship). The suffix "-che" is the same as the modern "-ij" in the Caspian dialects; "-ij" is a suffix attributing origin to a place, as in Yoshij, someone from Yosh. For instance: Soyreg -> Soyreg-ıj- in Zaza, Lankon -> Lankon-ıj- in Talysh, Teron -> Teron-ij in Tati and Yosh -> Yosh-ij- in the Caspian dialects (someone from Soyreg, Lankon, Tehran and Yosh respectively), and dew -> dew-ıj- (village -> villager) in Zaza, di -> div-oj- (village -> villager) in Talysh. The suffix "-iš" forms verbal nouns in Zaza when added to the preterite stem, and verbal nouns derived with this suffix have masculine gender. A similar suffix, "-išn", existed in Parthian and Middle Persian. Vocabulary The Zaza language distinguishes gender for the third person pronoun in both the direct and oblique cases. The masculine third person pronoun is o, the feminine one is a. Among all Western Iranian languages, Zaza, Semnani, Sangsari, the Tati dialects, Hazārrūdi, Cālī, Tākestāni, Kajali, Khalkhali, Karani, Lerdi, Diz, Sagzābādi, Eštehārdi, Ashtiani, Amorei, Alviri, Abyānei, Jowšaqāni, Abuzeydābādi and Farizandi distinguish gender for the third person pronoun. Phonology The vowel /e/ may also be realized as [ɛ] when occurring before a consonant. /ɨ/ may become lowered to [ɪ] when occurring before a velarized nasal /n/ [ŋ], or occurring between a palatal approximant /j/ and a palato-alveolar fricative /ʃ/. The vowels /ɑ/, /ɨ/, or /ə/ become nasalized when occurring before /n/, as [ɑ̃], [ɨ̃], and [ə̃], respectively. /n/ becomes a velar [ŋ] when following a velar consonant. The Western Iranian languages also show distinctive (diachronic) phonological changes over their historical evolution. Writing systems Zaza texts written during the Ottoman era were written in Arabic letters. The works of this era had religious content. The first Zaza text, written by Sultan Efendi in 1798, was written in Arabic letters in the Nesih script, which was also used in Ottoman Turkish. Following this work, the first Mawlid in the Zaza language, written by the Ottoman-Zaza cleric, writer and poet Ahmed el-Hassi in 1891–1892, was also written in Arabic letters and published in 1899. Another Mawlid in the Zaza language, written by another Ottoman-Zaza cleric, Osman Esad Efendi, between 1903 and 1906, was also written in Arabic letters.
Arabic letters were also used for Zaza by Karl Hadank in his prominent linguistic work on the language, "Mundarten der Zâzâ". The Zaza alphabet based on Arabic letters was similar to the Ottoman Turkish and Persian alphabets and included the letters پ چ ژ in addition to the standard Arabic letters. Even after the alphabet reform in Turkey, some Zaza writers continued to write using the Arabic alphabet. After the Republic, Zaza works began to be written in Latin letters, largely abandoning the Arabic-based Zaza alphabet. However, today Zaza does not have a common alphabet used by all Zazas. An alphabet called the Jacobson alphabet was developed with the contributions of the American linguist C. M. Jacobson and is used by the Zaza Language Institute in Frankfurt, which works on the standardization of the Zaza language. Another alphabet used for the language is the Bedirxan alphabet. A further Zaza alphabet, prepared by Zülfü Selcan and in use at Munzur University since 2012, consists of 32 letters, 8 of which are vowels and 24 of which are consonants. The Zaza alphabet is an extension of the Latin alphabet used for writing the Zaza language, consisting of 32 letters, six of which (ç, ğ, î, û, ş, and ê) have been modified from their Latin originals for the phonetic requirements of the language. Literature Zaza literature consists of oral and written texts produced in the Zaza language. Before it began to be written, it was passed on through oral literature. In this respect, Zaza literature is very rich in oral works. The language has many oral literary products such as deyr (folk song), kilam (song), dêse (hymn), şanıke (fable), hêkati (story) and qesê werênan (proverbs and idioms). Written works began to appear during the Ottoman Empire, and the early works had a religious/doctrinal nature. After the Republic, long-term language and cultural bans meant that Zaza literature developed in two centres, Turkey and Europe, mainly in Europe; after the bans were loosened, Zaza literature developed in Turkey as well. The first known written works of Zaza literature were produced during the Ottoman period; they were written in Arabic letters and had a religious nature. The first written work in Zaza during this period dates to the late 1700s. This first written text of the Zaza language was written by İsa Beg bin Ali, nicknamed Sultan Efendi, an Islamic history writer, in 1212 Hijri (1798). The work was written in Arabic letters and in the Naskh script, which is also used in Ottoman Turkish. The work consists of two parts. It covers the Eastern Anatolia region during the reign of Selim III, the life of Ali (caliph), Alevi doctrine and history, the translation of some parts of Nahj al-balagha into the Zaza language, apocalyptic subjects and poetic texts. About a hundred years after this work, another work in the Zaza language, Mevlit (Mewlid-i Nebi), was written by the Ottoman-Zaza cleric, writer and poet Ahmed el-Hassi (1867–1951) in 1891–1892. This first Mevlit work in the Zaza language was written in Arabic letters and published in 1899. The mawlid, written using Arabic prosody (aruz), resembles the mawlid of Süleyman Çelebi, and its introduction covers the life of the Islamic prophet Muhammad along with topics such as Allah, tawhid, munacaat, the ascension, and birth and creation.
It includes religious topics and consists of 14 chapters and 366 couplets. Another written work written during this period is another Mevlit written by Siverek mufti Osman Esad Efendi (1852–1929). The work called Biyişa Pexemberi (Birth of the Prophet) consists of chapters on the Islamic prophet Muhammad and the Islamic religion and was written in Zaza language in Arabic letters in 1901 (1903 according to some sources). The work was published in 1933, after the author's death. Apart from Zaza writers, non-Zaza/Ottoman writers/researchers such as Peter Ivanovich Lerch (1827–1884), Robert Gordon Latham (1812–1888) Dr. Humphry Sandwith (1822–1881), Wilhelm Strecker (1830–1890), Otto Blau (1828–1879), Friedrich Müller (1864) and Oskar Mann (1867–1917) included Zaza content (story, fairy tales dictionary) in their works in the pre-Republican period. Post-Republican Zaza literature developed through two branches, Turkey-centered and Europe-centered. During this period, the development of Zaza literature stagnated in Turkey due to long-term language and cultural bans. Zaza migration to European countries in the 1980s and the relatively free environment enabled the revival of Zaza literature in Europe. One of the works in the Zaza language written in post-Republican Turkey are two verse works written in the field of belief and fiqh in the 1940s. Following this work, another Mevlit containing religious subjects and stories was written by Mehamed Eli Hun in 1971. Zaza Divan, a 300-page manuscript consisting of Zaza poems and odes, started to be written by Mehmet Demirbaş in 1975 and completed in 2005, is another literary work in the divan genre written in this period. Mevlids and sirahs of Abdulkadir Arslan (1992–1995), Kamil Pueği (1999), Muhammed Muradan (1999-2000) and Cuma Özusan (2009) are other literary works with religious content. Written Zaza literature is rich in mawlid and religious works, and the first written works of the language are given in these genres. The development of Zaza literature through magazine publishing took place through magazines published by Zazas who immigrated to Europe after 1980 and published exclusively in the Zaza language, magazines that were predominantly in the Zaza language but published multilingually, and magazines that were not in the Zaza language but included works in the Zaza language. Kormışkan, Tija Sodıri, Vate are magazines published entirely in Zaza language. Apart from these, Ayre (1985–1987), Piya (1988–1992) and Raa Zazaistani (1991), which were published as language, culture, literature and history magazines by Ebubekir Pamukçu, the leading name of Zaza nationalism, are important magazines in this period that were predominantly Zaza and published multilingually. Ware, ZazaPress, Pir, Raştiye, Vengê Zazaistani, Zazaki, Zerq, Desmala Sure, Waxt, Çıme are other magazines that are Zaza-based and multilingual. In addition to these magazines published in European countries, Vatı (1997–1998), which is the first magazine published entirely in Zaza language and published in Turkey, and Miraz (2006) and Veng u Vaj (2008) are other important magazines published in Zaza language in Turkey. Magazines that are mainly published in other languages but also include works in Zaza language are magazines published in Kurdish and Turkish languages. 
Roja Newé (1963), Riya Azadi (1976), Tirêj (1979) and War (1997) are in the Kurdish language; Ermin (1991), Ateş Hırsızı (1992), Ütopya, Işkın, Munzur (2000) and Bezuvar (2009) are magazines in the Turkish language that include texts in Zaza. Today, works in different literary genres such as poetry, stories and novels in the Zaza language are published by various publishing houses in Turkey and European countries. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pharaoh] | [TOKENS: 4547] |
Contents Pharaoh The Pharaoh[a] was the monarch of ancient Egypt. The title came into use from the Eighteenth Dynasty onwards and was subsequently attributed to all the previous kings of Egypt. Before this Pharaoh was a term that meant more of the kings' administration. The earliest confirmed instance of the title used contemporaneously for a ruler is a letter to Akhenaten (reigned c. 1353–1336 BCE), possibly preceded by an inscription referring to Thutmose III (c. 1479–1425 BCE). Although the title only came into use in the Eighteenth Dynasty during the New Kingdom, scholars today use it for all the rulers of Egypt from the First Dynasty (c. 3150 BCE) until the annexation of Egypt by the Roman Republic in 30 BCE. In the early dynasties, ancient Egyptian kings had as many as three titles: the Horus, the Sedge and Bee (nswt-bjtj), and the Two Ladies or Nebty (nbtj) name. The Golden Horus and the nomen titles were added later. In Egyptian society, religion was central to everyday life. One of the roles of the king was as an intermediary between the deities and the people. The king thus was deputised for the deities in a role that was both as civil and religious administrator. The king owned all of the land in Egypt, enacted laws, collected taxes, and served as commander-in-chief of the military. Religiously, the king officiated over religious ceremonies and chose the sites of new temples. The king was responsible for maintaining Maat (mꜣꜥt), or cosmic order, balance, and justice, and part of this included going to war when necessary to defend the country or attacking others when it was believed that this would contribute to Maat, such as to obtain resources. During the early days prior to the unification of Upper and Lower Egypt, the Deshret or the "Red Crown", was a representation of the kingdom of Lower Egypt, while the Hedjet, the "White Crown", was worn by the kings of Upper Egypt. After the unification of both kingdoms, the Pschent, the combination of both the red and white crowns became the official crown of the pharaoh. With time new headdresses were introduced during different dynasties such as the Khat, Nemes, Atef, Hemhem crown, and Khepresh. At times, a combination of these headdresses or crowns worn together was depicted. Etymology The word pharaoh ultimately derives from the Egyptian compound pr ꜥꜣ, */ˌpaɾuwˈʕaʀ/ "great house", written with the two biliteral hieroglyphs pr "house" and ꜥꜣ "column", here meaning "great" or "high". It was the title of the royal palace and was used only in larger phrases such as smr pr-ꜥꜣ "Courtier of the High House", with specific reference to the buildings of the court or palace. From the Twelfth Dynasty onward, the word appears in a wish formula "Great House, May it Live, Prosper, and be in Health", but again only with reference to the royal palace and not a person. Sometime during the era of the New Kingdom, pharaoh became the form of address for a person who was king. The earliest confirmed instance where pr ꜥꜣ is used specifically to address the ruler is in a letter to the eighteenth dynasty king, Akhenaten (reigned c. 1353–1336 BCE), that is addressed to "Great House, L, W, H, the Lord". However, there is a possibility that the title pr ꜥꜣ first might have been applied personally to Thutmose III (c. 1479–1425 BCE), depending on whether an inscription on the Temple of Armant may be confirmed to refer to that king. 
During the Eighteenth dynasty (sixteenth to fourteenth centuries BCE) the title pharaoh was employed as a reverential designation of the ruler. About the late Twenty-first Dynasty (tenth century BCE), however, instead of being used alone and originally just for the palace, it began to be added to the other titles before the name of the king, and from the Twenty-Fifth Dynasty (eighth to seventh centuries BCE, during the declining Third Intermediate Period) it was, at least in ordinary use, the only epithet prefixed to the royal appellative. From the Nineteenth dynasty onward pr-ꜥꜣ on its own, was used as regularly as ḥm, "Majesty". The term, therefore, evolved from a word specifically referring to a building to a respectful designation for the ruler presiding in that building, particularly by the time of the Twenty-Second Dynasty and Twenty-third Dynasty.[citation needed] The first dated appearance of the title "pharaoh" being attached to a ruler's name occurs in Year 17 of Siamun (tenth century BCE) on a fragment from the Karnak Priestly Annals, a religious document. Here, an induction of an individual to the Amun priesthood is dated specifically to the reign of "Pharaoh Siamun". This new practice was continued under his successor, Psusennes II, and the subsequent kings of the twenty-second dynasty. For instance, the Large Dakhla stela is specifically dated to Year 5 of king "Pharaoh Shoshenq, beloved of Amun", whom all Egyptologists concur was Shoshenq I—the founder of the Twenty-second Dynasty—including Alan Gardiner in his original 1933 publication of this stela. Shoshenq I was the second successor of Siamun. Meanwhile, the traditional custom of referring to the sovereign as, pr-ˤ3, continued in official Egyptian narratives.[citation needed] The title is reconstructed to have been pronounced *[parʕoʔ] in the Late Egyptian language, from which the Greek historian Herodotus derived the name of one of the Egyptian kings, Koine Greek: Φερων. In the Hebrew Bible, the title also occurs as Hebrew: פרעה [parʕoːh]; from that, in the Septuagint, Koine Greek: φαραώ, romanized: pharaō, and then in Late Latin pharaō, both -n stem nouns. The Qur'an likewise spells it Arabic: فرعون firʿawn with n (here, always referring to the one evil king in the Book of Exodus story, by contrast to the good king in surah Yusuf's story). The Arabic combines the original ayin from Egyptian along with the -n ending from Greek. In English, the term was at first spelled "Pharao", but the translators for the King James Bible revived "Pharaoh" with "h" from the Hebrew. Meanwhile, in Egypt, *[par-ʕoʔ] evolved into Sahidic Coptic ⲡⲣ̅ⲣⲟ pərro and then ərro by rebracketing p- as the definite article "the" (from ancient Egyptian pꜣ). Other notable epithets are nswt, translated to "king"; ḥm, "Majesty"; jty for "monarch or sovereign"; nb for "lord";[note 2] and ḥqꜣ for "ruler". Functions As a central figure of the state, the pharaoh was the obligatory intermediary between the gods and humans. To the former, he ensured the proper performance of rituals in the temples; to the latter, he guaranteed agricultural prosperity, the defense of the territory and impartial justice. In the sanctuaries, the image of the sovereign is omnipresent through parietal scenes and statues. In this iconography, the pharaoh is invariably represented as the equal of the gods. In the religious speech, he is however only their humble servant, a zealous servant who makes multiple offerings. This piety expresses the hope of a just return of service. 
Thus supplied with offerings, the gods were expected to activate the forces of nature favorably, for the common benefit of all Egyptians. The only human being admitted to dialogue with the gods on an equal level, the Pharaoh was the supreme officiant, the first of the priests of the country. More broadly, pharaonic action covered every field of collective activity and knew no separation of powers. Every member of the administration acted only in the name of the royal person, by delegation of power. From the Pyramid Texts onward, the political actions of the sovereign were framed by a single maxim: "Bring Maat and repel Isfet", that is to say, promote harmony and repel chaos. As the nurturing father of the people, the Pharaoh ensured prosperity by calling upon the gods to regulate the waters of the Nile, by opening the granaries in times of famine and by guaranteeing a fair distribution of arable land. As chief of the armies, the pharaoh was the brave protector of the borders. Like Ra, who fights the serpent Apophis, the king of Egypt repelled the plunderers of the desert, fought invading armies and defeated internal rebels. In such scenes the Pharaoh is always the sole victor, shown standing and striking down a group of prisoners or shooting arrows from his battle chariot. As the sole legislator, he promulgated laws and decrees that were seen as inspired by divine wisdom. This legislation, kept in the archives and placed under the responsibility of the vizier, applied to all, for the common good and social harmony. Regalia Sceptres and staves were a general symbol of authority in ancient Egypt. One of the earliest royal sceptres was discovered in the tomb of Khasekhemwy in Abydos. Kings were also known to carry a staff, and Anedjib is shown on stone vessels carrying a so-called mks-staff. The sceptre with the longest history seems to be the heqa-sceptre, sometimes described as the shepherd's crook. The earliest examples of this piece of regalia date to prehistoric Egypt. A sceptre was found in a tomb at Abydos that dates to Naqada III. Another sceptre associated with the king is the was-sceptre. This is a long staff mounted with an animal head. The earliest known depictions of the was-sceptre date to the First Dynasty. The was-sceptre is shown in the hands of both kings and deities. The flail was later closely related to the heqa-sceptre (the crook and flail), but in early representations the king was also depicted solely with the flail, as shown on a late pre-dynastic knife handle that is now in the Metropolitan Museum, and on the Narmer Macehead. The earliest known evidence of the Uraeus—a rearing cobra—is from the reign of Den in the First Dynasty. The cobra supposedly protected the king by spitting fire at his enemies. Crowns and headdresses The red crown of Lower Egypt, the Deshret crown, dates back to pre-dynastic times and symbolised the chief ruler. A red crown has been found on a pottery shard from Naqada, and later, Narmer is shown wearing the red crown on both the Narmer Macehead and the Narmer Palette. The white crown of Upper Egypt, the Hedjet, was worn in the Predynastic Period by Scorpion II, and, later, by Narmer. The Deshret and Hedjet crowns were later combined into a double crown, called the Pschent crown. It is first documented in the middle of the First Dynasty of Egypt. The earliest depiction may date to the reign of Djet, and it is otherwise surely attested during the reign of Den. The khat headdress consists of a kind of "kerchief" whose end is tied similarly to a ponytail. 
The earliest depiction of the khat headdress comes from the reign of Den, but the headdress is not found again until the reign of Djoser. The Nemes headdress dates from the time of Djoser. It is the most common type of royal headgear depicted throughout Pharaonic Egypt. Every other type of crown, apart from the Khat headdress, has commonly been depicted on top of the Nemes. The statue of Djoser from his serdab in Saqqara shows the king wearing the nemes headdress. Osiris is shown wearing the Atef crown, which is an elaborate Hedjet with feathers and disks. Depictions of kings wearing the Atef crown originate from the Old Kingdom. The Hemhem crown is usually depicted on top of Nemes, Pschent, or Deshret crowns. It is an ornate, triple Atef with corkscrew sheep horns and usually two uraei. Depictions of this crown begin among New Kingdom rulers during the early Eighteenth Dynasty of Egypt. Also called the blue crown, the Khepresh crown has been depicted in art since the New Kingdom. It is often depicted being worn in battle, but it was also frequently worn during ceremonies. It used to be called a war crown by many, but modern historians refrain from defining it thus. Egyptologist Bob Brier has noted that despite their widespread depiction in royal portraits, no ancient Egyptian crown has ever been discovered. The tomb of Tutankhamun, which was discovered largely intact, contained such royal regalia as a crook and flail, but no crown was found among his funerary equipment. Diadems have been discovered. It is presumed that crowns were believed to have magical properties and were used in rituals. Brier's speculation is that crowns were religious or state items, so a dead king likely could not retain a crown as a personal possession. The crowns may have been passed along to the successor, much as the crowns of modern monarchies are. Titles During the Early Dynastic Period kings had three titles. The Horus name is the oldest and dates to the late pre-dynastic period. The Nesu Bity name was added during the First Dynasty. The Nebty name (Two Ladies) was first introduced toward the end of the First Dynasty. The Golden falcon (bik-nbw) name is not well understood. The prenomen and nomen were introduced later and are traditionally enclosed in a cartouche. By the Middle Kingdom, the official titulary of the ruler consisted of five names: Horus, Nebty, Golden Horus, nomen, and prenomen; for some rulers, only one or two of them may be known. The Horus name was adopted by the king when taking the throne. The name was written within a square frame representing the palace, named a serekh. The earliest known example of a serekh dates to the reign of king Ka, before the First Dynasty. The Horus name of several early kings expresses a relationship with Horus. Aha refers to "Horus the fighter", Djer refers to "Horus the strong", etc. Later kings express ideals of kingship in their Horus names. Khasekhemwy refers to "Horus: the two powers are at peace", while Nebra refers to "Horus, Lord of the Sun". The Nesu Bity name, also known as the prenomen, was one of the new developments from the reign of Den. The name would follow the glyphs for the "Sedge and the Bee". The title is usually translated as king of Upper and Lower Egypt. The nsw bity name may have been the birth name of the king. It was often the name by which kings were recorded in the later annals and king lists. The earliest example of a Nebty (Two Ladies) name comes from the reign of king Aha of the First Dynasty. 
The title links the king with the goddesses of Upper and Lower Egypt, Nekhbet and Wadjet. The title is preceded by the vulture (Nekhbet) and the cobra (Wadjet) standing on a basket (the neb sign). The Golden Horus or Golden Falcon name was preceded by a falcon on a gold or nbw sign. The title may have represented the divine status of the king. The association of Horus with gold may refer to the idea that the bodies of the deities were made of gold, and that the pyramids and obelisks were representations of (golden) sun-rays. The gold sign may also be a reference to Nubt, the city of Set. This would suggest that the iconography represents Horus conquering Set. The prenomen and nomen were contained in a cartouche. The prenomen often followed the King of Upper and Lower Egypt (nsw bity) or Lord of the Two Lands (nebtawy) title. The prenomen often incorporated the name of Re. The nomen often followed the title Son of Re (sa-ra), or the title Lord of Appearances (neb-kha). Divinity In Ancient Egypt, the Pharaoh was often considered to be divine. This precept originated before 3000 BCE, and the Egyptian office of divine kingship would go on to influence many other societies and kingdoms, surviving into the modern era. The Pharaoh also became a mediator between the gods and man. This institution represented an innovation over that of the Sumerian city-states, where the clan leader or king, though he mediated between his people and the gods, did not himself represent a god on Earth. The few Sumerian exceptions to this would post-date the origins of the practice in ancient Egypt. For example, the legendary king Gilgamesh, thought to have reigned in Uruk as a contemporary of the Egyptian ruler Djoser, was cast as the son of the Mesopotamian goddess Ninsun and of a human father, the previous ruler of Uruk. Another Mesopotamian example of a god-king was Naram-Sin of Akkad. During the Early Dynastic Period, the Pharaoh was represented as the divine incarnation of Horus, and the unifier of Upper and Lower Egypt. By the time of Djedefre (26th century BCE), the Pharaoh also ceased to be credited with a human father, as his mother was held to have been magically impregnated by the solar deity Ra. According to Pyramid Text Utterance 571, "... the King was fashioned by his father Atum before the sky existed, before earth existed, before men existed, before the gods were born, before death existed ..." According to an inscription on the statue of Horemheb (14th–13th centuries BCE): "he [Horemheb] already came out of his mother's bosom adorned with the prestige and the divine color ..." Inscriptions regularly described the Pharaoh as the "good god" or "perfect god" (nfr ntr). By the time of the New Kingdom, the king's divinity was understood as his possession of the manifestation of the god Amun-Re, referred to as his 'living royal ka', which he received during the coronation ceremony. The divinity of the Pharaoh was still upheld during the period of Persian domination of Egypt. The Persian emperor Darius the Great (522–486 BCE) was referred to as a divine being in Egyptian temple texts. Such descriptions continued to be applied to Alexander the Great after his conquest of Egypt, and later still to the rulers of the Ptolemaic Kingdom that succeeded Alexander. Descriptions of the divinity of the Pharaoh are much rarer in sources from Classical Greece. 
One Ptolemaic-era hymn describes the divinity of the Pharaoh, though this may reflect Greek notions of divine kingship just as much as it could reflect Egyptian ones. The historian Herodotus explicitly denied the divinity of the Pharaoh, claiming that Egyptian priests rejected any notion of the king's divinity. The only explicit classical Greek source which describes the divinity of the Pharaoh is contained in the writings of Diodorus Siculus in the 1st century BCE, who in turn relies on Hecataeus of Abdera as his source of information. Diodorus slightly contradicts himself in a different passage, where he asserts that Darius I was the first ruler of Egypt to be honored as a god. Even after the era of the Egyptian kings and pharaohs, the notion of the Pharaoh as a self-proclaimed divine being survived and is described in rabbinic literature. In these sources, the Pharaoh is described as hubristically asserting his own divinity and yet, compared to the one true God, being no more than an impotent human. Mekhilta of Rabbi Ishmael, Shirah 8:32 names Pharaoh among those who proclaimed themselves as gods, alongside Sennacherib and Nebuchadnezzar. Genesis Rabbah 89:3 invokes Pharaoh describing himself as the god over the Nile river. In Exodus Rabbah 10:2, Pharaoh boasts that he is the creator and owner of the Nile. God is then said to have responded by challenging the Pharaoh over who owns the Nile, bringing forth from it a plague of frogs that consumed Egypt's agriculture. In other midrashic texts, Pharaoh asserts himself as the creator of the universe and even of himself. In the Tanhuma, in commentary on Ezekiel 29:9, Pharaoh is said to have proclaimed himself as lord of the universe. Pharaoh is represented as a heretical figure who presents himself as divine, and these texts claim that his pretensions were exposed when he had to go to the Nile to relieve himself. 
======================================== |
[SOURCE: https://www.reddit.com/r/MinecraftCommands/comments/1radb0a/lightning_smite_command/] | [TOKENS: 49] |
Lightning smite command? So I'm trying to smite all the mobs around the tagged player, but I'm not sure how to do so.
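One way to approach this, as a minimal sketch assuming Java Edition 1.13+ command syntax: the tag name "smite" and the 10-block radius below are placeholder assumptions to replace with your own values, and Bedrock Edition uses slightly different selector syntax, so the command would need adapting there.

    execute at @a[tag=smite] at @e[type=!player,distance=..10] run summon minecraft:lightning_bolt ~ ~ ~

Run from a repeating command block or a tick function, this anchors execution at the tagged player, then at every non-player entity within 10 blocks of that player, and summons a lightning bolt on each of them.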
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Confederation_of_United_States_of_America] | [TOKENS: 9040] |
Contents Confederation period The Confederation period was the era of the United States' history in the 1780s after the American Revolution and prior to the ratification of the United States Constitution. In 1781, the United States ratified the Articles of Confederation and Perpetual Union and prevailed in the Battle of Yorktown, the last major land battle between British and American Continental forces in the American Revolutionary War. American independence was confirmed with the 1783 signing of the Treaty of Paris. The fledgling United States faced several challenges, many of which stemmed from the lack of an effective central government and unified political culture. The period ended in 1789 following the ratification of the United States Constitution, which established a new, more effective, federal government. The Articles of Confederation established a loose confederation of states with a weak confederated government. An assembly of delegates acted on behalf of the states they represented. This unicameral body, officially referred to as the United States in Congress Assembled, had little authority, and could not accomplish anything independent of the states. It had no chief executive, and no court system. Congress lacked the power to levy taxes, regulate foreign or interstate commerce, or effectively negotiate with foreign powers. The weakness of Congress proved self-reinforcing, as the leading political figures of the day served in state governments or foreign posts. The failure of the confederated government to handle the challenges facing the United States led to calls for reform and frequent talk of secession. The Treaty of Paris left the United States with a vast territory spanning from the Atlantic Ocean to the Mississippi River. Settlement of the trans-Appalachian territories proved difficult, in part due to the resistance of Native Americans and the neighboring foreign powers of Great Britain and Spain. The British refused to evacuate US territory, while the Spanish used their control of the Mississippi River to stymie Western settlement. In 1787, Congress passed the Northwest Ordinance, which set an important precedent by establishing the first organized territory under the control of the confederated government. After Congressional efforts to amend the Articles failed, numerous American leaders met in Philadelphia in 1787 to establish a new constitution. The new constitution was ratified in 1788, and the new federal government began meeting in 1789, marking the end of the Confederation period. Background The American Revolutionary War broke out against British rule in April 1775 with the Battles of Lexington and Concord. The Second Continental Congress met in May 1775, and established an army funded by Congress and under the leadership of George Washington, a Virginian who had fought in the French and Indian War. On July 4, 1776, as the war continued and two days after endorsing the Lee Resolution to break from British control, Congress adopted the Declaration of Independence. At exactly the same time that Congress declared independence, it also created a committee to craft a constitution for the new nation. Though some in Congress hoped for a strong centralized state, most Americans wanted legislative power to rest primarily with the states and saw the central government as a mere wartime necessity. 
The resulting constitution, which came to be known as the Articles of Confederation and Perpetual Union, provided for a weak central government with little power to coerce the state governments. The first article of the new constitution established a name for the new federation – the United States of America. The first draft of the Articles of Confederation, written by John Dickinson, was presented to Congress on July 12, 1776, but Congress did not send the proposed constitution to the states until November 1777. Three major constitutional issues divided Congress: state borders, including claims to lands west of the Appalachian Mountains; state representation in the new Congress; and whether tax levies on states should take slaves into account. Ultimately, Congress decided that each state would have one vote in Congress and that slaves would not affect state levies. By 1780, as the war continued, every state but Maryland had ratified the Articles; Maryland refused to ratify the constitution until all of the other states relinquished their western land claims to Congress. The success of Britain's Southern strategy, along with pressure from America's French allies, convinced Virginia to cede its claims north of the Ohio River, and Maryland finally ratified the Articles in January 1781. The new constitution took effect in March 1781, and the Congress of the Confederation technically replaced the Second Continental Congress as the central government, but in practice the structure and personnel of the new Congress were quite similar to those of the old Congress. After the American victory at the Battle of Yorktown in October 1781 and the collapse of British Prime Minister North's ministry in March 1782, both sides sought a peace agreement. The American Revolutionary War ended with the signing of the 1783 Treaty of Paris. The treaty granted the United States independence, as well as control of a vast region south of the Great Lakes and extending from the Appalachian Mountains west to the Mississippi River. Although the British Parliament had attached this trans-Appalachian region to Quebec in 1774 as part of the Quebec Act, several states had land claims in the region based on royal charters and proclamations that defined their boundaries as stretching "from sea to sea." Some Americans had hoped the treaty would provide for the acquisition of Florida, but that territory was restored to Spain, which had joined the U.S. and France in the war against Britain and demanded its spoils. The British fought hard and successfully to keep Canada, and the treaty acknowledged that. Observers at the time and historians ever since have emphasized the generosity of the British territorial concessions. Historians such as Alvord, Harlow, and Ritcheson have emphasized that Britain's generous territorial terms were based on a statesmanlike vision of close economic ties between Britain and the United States. The treaty was designed to facilitate the growth of the American population and create lucrative markets for British merchants, without any military or administrative costs to Britain. As the French foreign minister Vergennes later put it, "The English buy peace rather than make it". The treaty also addressed several additional issues. The United States agreed to honor debts incurred prior to 1775, while the British agreed to remove their soldiers from American soil. 
Privileges that the Americans had received because of their membership in the British Empire no longer applied, most notably protection from pirates in the Mediterranean Sea. Neither the Americans nor the British would consistently honor these additional clauses. Individual states ignored treaty obligations by refusing to restore confiscated Loyalist property, and many continued to confiscate Loyalist property for "unpaid debts". Some states, notably Virginia, maintained laws against payment of debts to British creditors. The British often ignored the provision of Article 7 regarding removal of slaves. American leadership "Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States, in Congress assembled." The Articles of Confederation created a loose union of states. The confederation's central government consisted of a unicameral Congress with legislative and executive function, and was composed of delegates from each state in the union. Congress received only those powers which the states had previously recognized as belonging to king and parliament. Each state had one vote in Congress, regardless of its size or population, and any act of Congress required the votes of nine of the 13 states to pass; any decision to amend the Articles required the unanimous consent of the states. Each state's legislature appointed multiple members to its delegation, allowing delegates to return to their homes without leaving their state unrepresented. Under the Articles, states were forbidden from negotiating with other nations or maintaining a military without Congress's consent, but almost all other powers were reserved for the states. Congress lacked the power to raise revenue, and was incapable of enforcing its own legislation and instructions. As such, Congress was heavily reliant on the compliance and support of the states. Following the conclusion of the Revolutionary War, which had provided the original impetus for the Articles, Congress's ability to accomplish anything of material consequence declined significantly. Rarely did more than half of the roughly sixty delegates attend a session of Congress at any given time, causing difficulties in raising a quorum. Many of the most prominent American leaders, such as Washington, John Adams, John Hancock, and Benjamin Franklin, retired from public life, served as foreign delegates, or held office in state governments. One leader who did emerge during this period was James Madison, who became convinced of the need for a stronger central government after serving in the Congress of the Confederation from 1781 to 1783. He would continue to call for a stronger central government for the remainder of the 1780s. Congress met in Philadelphia from 1778 until June 1783, when it moved to Princeton, New Jersey due to the Pennsylvania Mutiny of 1783. Congress would also convene in Annapolis, Maryland and Trenton, New Jersey before settling in New York City in 1785. The lack of strong leaders in Congress, as well as the body's impotence and itinerant nature, embarrassed and frustrated many American federalists, including Washington. The weakness of Congress also led to frequent talk of secession, and many believed that the United States would break into four confederacies, consisting of New England, the Mid-Atlantic states, the Southern states, and the trans-Appalachian region, respectively. 
The Congress of the Confederation was the sole federal governmental body created by the Articles of Confederation, but Congress established other bodies to undertake executive and judicial functions. In 1780, Congress created the Court of Appeals in Cases of Capture, which acted as the lone federal court during the Confederation period. In early 1781, Congress created executive departments to handle Foreign Affairs, War, and Finance. A fourth department, the Post Office Department, had existed since 1775 and continued to function under the Articles. Congress also authorized the creation of a Marine Department, but chose to place the naval forces under the Finance Department after Alexander McDougall declined to lead the Marine Department. The four departments were charged with administering the federal civil service, but they had little power independent of Congress. Pennsylvania merchant Robert Morris served as the Superintendent of Finance from 1781 to 1784. Though Morris had become somewhat unpopular during the war due to his successful business ventures, Congress hoped that he would be able to ameliorate the country's ruinous financial state. After his proposals were blocked, Morris resigned in frustration in 1784, and was succeeded by a three-person Treasury Board. Benjamin Lincoln served as Secretary of War from 1781 until the end of the Revolutionary War in 1783. He was eventually succeeded by Henry Knox, who held the position from 1785 to 1789. Robert Livingston served as the Secretary of Foreign Affairs from 1781 to 1783, and he was followed in office by John Jay, who served from 1784 to 1789. Jay proved to be an able administrator, and he took control of the nation's diplomacy during his time in office. Ebenezer Hazard served as the United States Postmaster General from 1782 to 1789. State governments After the thirteen colonies declared their independence and sovereignty in 1776, each was faced with the task of replacing royal authority with institutions based on popular rule. To varying degrees, the states embraced egalitarianism during and after the war. Each state wrote a new constitution, all of which established an elected executive, and many of which greatly expanded the franchise. The Pennsylvania Constitution of 1776 was perhaps the most democratic of these constitutions, as it granted suffrage to all taxpaying male citizens. Many of the new constitutions included a bill of rights that guaranteed freedom of the press, freedom of speech, trial by jury, and other freedoms. Conservative patriots such as Oliver Wolcott, who had fought for independence from Britain but did not favor major changes to the social order, looked with alarm on the new influence of the lower classes and the rise of politicians independent from the upper class. Following the end of the Revolutionary War, the states embarked on various reforms. Several states enshrined freedom of religion in their constitutions, and every Southern state ended the Anglican Church's status as the state religion. Several states established state universities, while private universities also flourished. Numerous states reformed their criminal codes to reduce the number of capital crimes. Northern states invested in infrastructure projects, including roads and canals that provided access to Western settlements. The states also took action regarding slavery, which appeared increasingly hypocritical to a generation that had fought against what they saw as tyranny. 
During and after the Revolution, every Northern state either passed laws or experienced court decisions providing for gradual emancipation or the immediate abolition of slavery. Though no Southern states provided for emancipation, they did pass laws restricting the slave trade. The states continued to carry the burden of heavy debt loads acquired during the Revolutionary War. With the partial exceptions of New York and Pennsylvania, which received revenue from import duties, most states relied on individual and property taxes for revenue. To cope with the war-time debts, several states were forced to raise taxes to a level several times higher than it had been prior to the war. These taxes sparked anger among the populace, particularly in rural areas, and in Massachusetts led to an armed uprising known as Shays' Rebellion. As both Congress and the government of Massachusetts proved unable to suppress the rebellion, former Secretary of War Benjamin Lincoln raised a private army which put an end to the insurgency. Britain relinquished its claim to Vermont in the Treaty of Paris, but Vermont did not join the United States. Though most in Vermont wanted to become the fourteenth state, New York and New Hampshire, which both claimed parts of Vermont, blocked this ambition. Throughout the 1780s, Vermont acted as an independent state, known as the Vermont Republic. Fiscal policies The United States had acquired huge debts during the Revolutionary War, in part due to Congress's lack of taxation powers; under the Articles, only the states could levy taxes or regulate commerce. In 1779, Congress had relinquished most of its economic power to the states, as it stopped printing currency and requested that the states directly pay the soldiers, but the states also suffered from fiscal instability. Robert Morris, appointed as superintendent of finance in 1781, won passage of major centralizing reforms such as the partial assumption of state debt, the suspension of payments to military personnel, and the creation of the Bank of North America. Morris emerged as perhaps the most powerful individual in the central government, with some referring to him as "The Financier", or even "The Dictator". In 1783, Morris, with the support of congressmen such as Madison and Alexander Hamilton, won congressional approval of a five percent levy on imports, which would grant the central government a consistent and independent source of revenue. However, with the signing of the Treaty of Paris, the states became more resistant to granting power to Congress. Though all but two states approved the levy, it never won the unanimous backing of the states and thus Congress struggled to find revenue throughout the 1780s. Common defense As the Revolutionary War came to an end, the officers and enlisted men of the Continental Army became increasingly disgruntled over their lack of pay, as Congress had suspended payment due to the poor financial state of the central government. Congress had promised the officers a lifetime pension in 1780, but few of the officers believed that they would receive this benefit. In December 1782, several officers, led by Alexander McDougall, petitioned Congress for their benefits. The officers hoped to use their influence to force the states to allow the federal government to levy a tariff, which in turn would provide revenue to pay the soldiers. 
Historians such as Robert Middlekauff have argued that some members of the central government, including Congressman Alexander Hamilton and Superintendent of Finance Robert Morris, attempted to use this growing dissatisfaction to increase the power of Congress. An anonymous letter circulated among the officers; the document called for the payment of soldiers and threatened mutiny against General Washington and Congress. In a gathering of army officers in March 1783, Washington denounced the letter, but promised to lobby Congress for payment. Washington's speech defused the brewing Newburgh Conspiracy, named for the New York town in which the army was encamped, but dissatisfaction among the soldiers remained high. In May 1783, fearing a mutiny, Washington furloughed most of his army. After Congress failed to pass an amendment granting the central government the power to levy an impost on imports, Morris paid the army with certificates that the soldiers labeled "Morris notes." The notes promised to pay the soldiers in six months, but few of the soldiers believed that they would ever actually receive payment, and most Morris notes were sold to speculators. Many of the impoverished enlisted men were forced to beg for help on their journeys home. In June, the Pennsylvania Mutiny of 1783 broke out among angry soldiers who demanded payment, causing Congress to relocate the capital to Princeton. Upon re-convening, Congress reduced the size of the army from 11,000 to 2,000. Though security was a top priority of American leaders, in the short term a smaller Continental Army would suffice because Americans had confidence that the Atlantic Ocean would provide protection from European powers. On December 23, 1783, Washington resigned from the army, earning the admiration of many for his willingness to relinquish power. In August 1784, Congress established the First American Regiment, the nation's first peacetime regular army infantry unit, which served primarily on the American frontier. Even so, the size of the army continued to shrink, down to a mere 625 soldiers, while Congress effectively disbanded the Continental Navy in 1785 with the sale of the USS Alliance. The small, poorly equipped army would prove powerless to prevent squatters from moving onto Native American lands, further inflaming a tense situation on the frontier. Western settlement Partly due to the restrictions imposed by the Royal Proclamation of 1763, only a handful of Americans had settled west of the Appalachian Mountains prior to the outbreak of the American Revolutionary War. The start of that war lifted the barrier to settlement, and by 1782 approximately 25,000 Americans had settled in Transappalachia. After the war, American settlement in the region continued. Though life in these new lands proved hard for many, western settlement offered the prize of property, an unrealistic aspiration for some in the East. Westward expansion stirred enthusiasm even in those who did not move west, and many leading Americans, including Washington, Benjamin Franklin, and John Jay, purchased lands in the west. Land speculators founded groups like the Ohio Company, which acquired title to vast tracts of land in the west and often came into conflict with settlers. Washington and others co-founded the Potomac Company to build a canal linking the Potomac River with the Ohio River. Washington hoped that this canal would provide a cultural and economic link between the east and west, thus ensuring that the West would not ultimately secede. 
In 1784, Virginia formally ceded its claims north of the Ohio River, and Congress created a government for the region now known as the Old Northwest with the Land Ordinance of 1784 and the Land Ordinance of 1785. These laws established the principle that the Old Northwest would be governed by a territorial government, under the aegis of Congress, until it reached a certain level of political and economic development. At that point, the former territories would enter the union as states, with rights equal to those of any other state. The federal territory stretched across most of the area west of Pennsylvania and north of the Ohio River, though Connecticut retained a small part of its claim in the West in the form of the Connecticut Western Reserve, a strip of land south of Lake Erie. In 1787, Congress passed the Northwest Ordinance, which granted Congress greater control of the region by establishing the Northwest Territory. Under the new arrangement, many of the formerly elected officials of the territory were instead appointed by Congress. In order to attract Northern settlers, Congress outlawed slavery in the Northwest Territory, though it also passed a fugitive slave law to appease the Southern states. While the Old Northwest fell under the control of the federal government, Georgia, North Carolina, and Virginia retained control of the Old Southwest; each state claimed to extend west to the Mississippi River. In 1784, settlers in western North Carolina sought statehood as the State of Franklin, but their efforts were denied by Congress, which did not want to set a precedent regarding the secession of states. By the 1790 Census, the populations of Kentucky and Tennessee had grown dramatically to 73,000 and 35,000, respectively. Kentucky, Tennessee, and Vermont would all gain statehood between 1791 and 1796. With the aid of Britain and Spain, Native Americans resisted western settlement. Though Southern leaders and many federalists lent their political support to the settlers, most Northern leaders were more concerned with trade than with western settlement, and the weak central government lacked the power to compel concessions from foreign governments. The 1784 closure of the Mississippi River by Spain denied access to the sea for the exports of Western farmers, greatly impeding efforts to settle the West, and the Spanish also provided arms to Native Americans. The British had restricted settlement of the trans-Appalachian lands prior to 1776, and they continued to supply arms to Native Americans after the signing of the Treaty of Paris. Between 1783 and 1787, hundreds of settlers died in low-level conflicts with Native Americans, and these conflicts discouraged further settlement. As Congress provided little military support against the Native Americans, most of the fighting was done by the settlers. By the end of the decade, the frontier was engulfed in the Northwest Indian War against a confederation of Native American tribes. These Native Americans sought the creation of an independent Indian barrier state with the support of the British, posing a major foreign policy challenge to the United States. Economy and trade A brief economic recession followed the war, but prosperity returned by 1786. About 80,000 Loyalists left the U.S. for elsewhere in the British Empire, leaving their lands and properties behind. Some returned after the war, especially to more welcoming states like New York and South Carolina. 
Economically, the mid-Atlantic states recovered particularly quickly and began manufacturing and processing goods, while New England and the South experienced more uneven recoveries. Trade with Britain resumed, and the volume of British imports after the war matched the volume from before the war, but exports fell precipitously. Adams, serving as the ambassador to Britain, called for a retaliatory tariff in order to force the British to negotiate a commercial treaty, particularly regarding access to Caribbean markets. However, Congress lacked the power to regulate foreign commerce or compel the states to follow a unified trade policy, and Britain proved unwilling to negotiate. While trade with the British did not fully recover, the U.S. expanded trade with France, the Netherlands, Portugal, and other European countries. Despite these good economic conditions, many traders complained of the high duties imposed by each state, which served to restrain interstate trade. Many creditors also suffered from the failure of domestic governments to repay debts incurred during the war. Though the 1780s saw moderate economic growth, many experienced economic anxiety, and Congress received much of the blame for failing to foster a stronger economy. Foreign affairs In the decade after the end of the Revolutionary War, the United States benefited from a long period of peace in Europe, as no country posed a direct and immediate threat to the United States. Nevertheless, the weakness of the central government, and the desire of anti-federalists to keep a federal government from assuming powers held by the state governments, greatly hindered diplomacy. In 1776, the Continental Congress had drafted the Model Treaty, which served as a guide for U.S. foreign policy during the 1780s. The treaty sought to abolish trade barriers such as tariffs, while avoiding political or military entanglements. In this, it reflected the foreign policy priorities of many Americans, who sought to play a large role in the global trading community while avoiding war. Lacking a strong military, and divided by differing sectional priorities, the U.S. was often forced to accept unfavorable terms of trade during the 1780s. William Petty, 2nd Earl of Shelburne, served as Prime Minister during the negotiations that led to the Treaty of Paris. Shelburne favored peaceful relations and increased trade with the U.S., but his government fell in 1783, and his successors were less intent on amicable relations with the United States. Many British leaders hoped that the U.S. would ultimately collapse due to its lack of cohesion, at which point Britain could re-establish hegemony over North America. In western territories—chiefly in present-day Wisconsin and Michigan—the British retained control of several forts and continued to cultivate alliances with Native Americans. These policies impeded U.S. settlement and allowed Britain to extract profits from the lucrative fur trade. The British justified their continued occupation of the forts on the basis that the Americans had blocked the collection of pre-war debts owed to British citizens, which a subsequent investigation by Jay confirmed. As there was little the powerless Congress could do to coerce the states into action, the British retained their justification for the occupation of the forts until the matter was settled by the Jay Treaty in 1795. Jay emphasized the need for expanded trade, specifically with Great Britain, which conducted by far the most international trade. 
However, Britain continued to pursue mercantilist economic policies, excluded the U.S. from trading with its Caribbean colonies, and flooded the U.S. with manufactured goods. U.S. merchants responded by opening up an entirely new market in China. Americans eagerly purchased tea, silks, spices, and chinaware, while the Chinese were eager for American ginseng and furs. Spain fought the British as an ally of France during the Revolutionary War, but it distrusted the ideology of republicanism and was not officially an ally of the United States. Spain controlled the territories of Florida and Louisiana, positioned to the south and west of the United States. Americans had long recognized the importance of navigation rights on the Mississippi River, as it was the only realistic outlet for many settlers in the trans-Appalachian lands to ship their products to other markets, including the Eastern Seaboard of the United States. Despite having fought a common enemy in the Revolutionary War, Spain saw U.S. expansionism as a threat to its empire. Seeking to stop the American settlement of the Old Southwest, Spain denied the U.S. navigation rights on the Mississippi River, provided arms to Native Americans, and recruited friendly American settlers to the sparsely populated territories of Florida and Louisiana. Working with Alexander McGillivray, Spain signed treaties with the Creeks, the Chickasaws, and the Choctaws to make peace among themselves and ally with Spain, but the pan-Indian coalition proved unstable. Spain also bribed American General James Wilkinson in a plot to make much of the southwestern United States secede, but nothing came of it. Despite geopolitical tensions, Spanish merchants welcomed trade with the United States and encouraged the U.S. to set up consulates in Spain's New World colonies. A new line of commerce emerged in which American merchants imported goods from Britain and then resold them to the Spanish colonies. The U.S. and Spain negotiated the Jay–Gardoqui Treaty, which would have required the U.S. to renounce any right to access the Mississippi River for twenty-five years in return for a commercial treaty and the mutual recognition of borders. In 1786, Jay submitted the treaty to Congress, precipitating a divisive debate. Southerners, led by James Monroe of Virginia, opposed the provision regarding the Mississippi and accused Jay of favoring Northeastern commercial interests over western growth. Ratification of treaties required nine votes under the Articles of Confederation, and all five Southern states voted against ratification, dooming the treaty. Under the leadership of Foreign Minister Vergennes, France had entered the Revolutionary War, in large part to damage the British. The French were an indispensable ally during the war, providing supplies, finances, and a powerful navy. In 1778, France and the United States signed the Treaty of Alliance, establishing a "perpetual" military alliance, as well as the Treaty of Amity and Commerce, which established commercial ties. In the Treaty of Paris, Britain consented to relatively favorable terms to the United States partly out of a desire to weaken U.S. dependency on France. After the war, the U.S. sought increased trade with France, but commerce between the two countries remained limited. The U.S. also requested French aid in pressuring the British to evacuate their forts in U.S. territory, but the French were not willing to intervene in Anglo-American relations again. 
John Adams, as ambassador to the Netherlands, managed to convince the small country to break its alliance with Britain, join the war alongside France, and provide funding and formal recognition to the United States in 1782. The Netherlands, along with France, became the major American ally in Europe. The Barbary pirates, who operated out of the North African states of Morocco, Algiers, Tunis, and Tripoli, posed a threat to shipping in the Mediterranean Sea during the late 18th century. The major European powers paid the Barbary pirates tribute to avoid their raids, but the U.S. was not willing to meet the terms sought by the pirates, in part due to the central government's lack of money. As such, the pirates preyed on U.S. shipping during the 1780s. Creation of a new constitution The end of the war in 1783 temporarily ended any possibility of the states giving up power to a central government, but some in and out of Congress continued to favor a stronger or more effective federal government. Soldiers and former soldiers formed a powerful bloc calling for a stronger federal government, which they believed would have allowed for better war-time leadership. They were joined by merchants, who wanted a strong federal government to provide order and sound economic policies, and many expansionists, who believed the central government could best protect American lands in the West. Additionally, John Jay, Henry Knox, and others called for an independent executive who could govern more decisively than a large, legislative body like Congress. Despite growing feelings of nationalism, particularly among younger Americans, the efforts of federalists to grant Congress greater powers were defeated by those who preferred the continued supremacy of the states. Most Americans saw the Revolutionary War as a struggle against a strong government, and few state leaders were willing to surrender their own state's sovereignty. In 1786, Charles Cotesworth Pinckney of South Carolina led the creation of a grand congressional committee to consider constitutional amendments. The committee proposed seven amendments, and its proposals would have granted the central government the power to regulate commerce and fine states that failed to supply adequate funding to Congress. Congress failed to act on these proposals, and reformers began to take action outside of Congress. In 1785, Washington hosted the Mount Vernon Conference, which established an agreement between Maryland and Virginia regarding several commercial issues. Encouraged by this example of interstate cooperation, Madison convinced the Virginia assembly to call for another conference, the Annapolis Convention, with the goal of promoting interstate trade. Only five state delegations attended the convention, but the delegates that did attend largely agreed on the need to reform the federal government. The delegates called for a second convention to take place in 1787 in Philadelphia to consider constitutional reform. In the months after the Annapolis Convention, reformers took steps to ensure better turnout at the next convention. They secured the blessing of Congress to consider constitutional reform and made sure to invite Washington, the most prominent American leader. The federalist call for a constitutional convention was bolstered by the outbreak of Shays' Rebellion, which convinced many of the need for a federal government powerful enough to help suppress uprisings. 
Though there was not a widespread feeling in the population that the Articles of Confederation needed major reform, the leaders of each state recognized the problems posed by the weak central government. When the Philadelphia Convention opened in May 1787, every state but Rhode Island sent a delegation. Three quarters of the delegates had served in Congress, and all recognized the difficulty, and importance, of amending the Articles. Though each delegate feared the loss of their own state's power, there was wide agreement among the delegates that the United States required a more effective federal government capable of effectively managing foreign relations and ensuring security. Many also hoped to establish a uniform currency and common copyright and immigration laws. With the attendance of powerful and respected leaders like Washington and Franklin, who helped provide some measure of legitimacy to the gathering, the delegates agreed to pursue sweeping changes to the central government. Shortly after the convention began in May 1787, delegates elected Washington to preside over the convention and agreed that the meetings would not be open to the public. The latter decision allowed for the consideration of an entirely new constitution, as open discussion of a new constitution would likely have inspired great public outcry. Led by James Madison, Virginia's delegates introduced a set of reforms known as the Virginia Plan, which called for a more effective central government with three independent branches of government: executive, legislative, and judicial. The plan envisioned a strong federal government with the power to nullify state laws. Madison's plan was well-received and served as the basis for the convention's discussion, though several of its provisions were altered over the course of the convention. During the convention, Madison and James Wilson of Pennsylvania emerged as two of the most important advocates of a new constitution based on the Virginia Plan, while prominent opponents of the final document would include Edmund Randolph, George Mason, and Elbridge Gerry. The balance of power between the federal government and the state governments emerged as the most debated topic of the convention, and the convention ultimately agreed to a framework in which the federal and state governments shared power. The federal government would regulate interstate and foreign commerce, coin money, and oversee foreign relations, but states would continue to exercise power in other areas. A second major issue was the allocation of congressional representatives. Delegates from large states wanted representation in Congress to be proportional to population, while delegates from smaller states preferred that each state receive equal representation. In the Connecticut Compromise, the delegates agreed to create a bicameral Congress in which each state received equal representation in the upper house (the Senate), while representation in the lower house (the House of Representatives) was apportioned by population. The issue of slavery also threatened to derail the convention, though abolition was not a priority for Northern delegates. The delegates agreed to the Three-Fifths Compromise, which counted three-fifths of the slave population for the purposes of taxation and representation. 
Southerners also won inclusion of the Fugitive Slave Clause, which allowed owners to recover their escaped slaves from free states, as well as a clause that forbade Congress from banning the Atlantic slave trade until 1808. The delegates of the convention also sought to limit the democratic nature of the new constitution, with indirect elections established for the Senate and the office of the President of the United States, who would lead the executive branch. The proposed constitution contained several other important differences from the Articles of Confederation. States saw their economic power severely curtailed, and notably were barred from impairing contracts. While members of the Congress of the Confederation and most state legislators served one-year terms, members of the House would serve for two-year terms and members of the Senate would serve for six-year terms. Neither house of Congress would be subject to term limits. Though the states would elect members of the Senate, the House of Representatives would be elected directly by the people. The president would be elected independently of the legislature, and hold broad powers over foreign affairs, military policy, and appointments. The president also received the power to veto legislation. The judicial power of the United States would be vested in the Supreme Court of the United States and any inferior courts established by Congress, and these courts would have jurisdiction over federal issues. The amendment process would no longer require unanimous consent of the states, although it still required the approval of Congress and of three-fourths of the states. Ratification of the Constitution written at the Philadelphia Convention was not assured, as opponents of a stronger federal government mobilized against ratification. By the end of the convention, sixteen of the fifty-five delegates had either left the convention or refused to sign the document. Article Seven of the Constitution provided for submission of the document to state conventions, rather than Congress or the state legislatures, for ratification. Though Congress had not authorized consideration of a new Constitution, most members of Congress respected the stature of the leaders who had assembled in Philadelphia. Roughly one-third of the members of Congress had been delegates at the Philadelphia Convention, and these former delegates proved to be powerful advocates for the new constitution. After debating for several days, Congress transmitted the Constitution to the states without recommendation, letting each state decide for itself whether or not to ratify the document. Ratification of the Constitution required the approval of nine states. The ratification debates in Massachusetts, New York, Pennsylvania, and Virginia were of particular importance, as they were the four largest and most powerful states in the nation. Those who advocated ratification took the name Federalists. To sway the closely divided New York legislature, Hamilton, Madison, and Jay anonymously published The Federalist Papers, which became seminal documents that affected the debate in New York and other states. Opponents of the new constitution became known as Anti-Federalists. Though most Anti-Federalists acknowledged the need for changes to the Articles of Confederation, they feared the establishment of a powerful, and potentially tyrannical, central government. 
Members of both camps held wide ranges of views; for example, some Anti-Federalists like Luther Martin wanted only minor changes to the Articles of Confederation, while others such as George Mason favored a less powerful version of the federal government proposed by the Constitution. Federalists were strongest in eastern, urban counties, while Anti-Federalists tended to be stronger in rural areas. Each faction engaged in a spirited public campaign to shape the ratification debate, though the Federalists tended to be better financed and organized. Over time, the Federalists were able to convince many in the skeptical public of the merits of the new Constitution. The Federalists won their first ratification victories in December 1787, when Delaware, Pennsylvania, and New Jersey all ratified the Constitution. By the end of February 1788, six states, including Massachusetts, had ratified the Constitution. In Massachusetts, the Federalists won over skeptical delegates by promising that the first Congress of the new Constitution would consider amendments limiting the federal government's power. This promise to amend the Constitution after its ratification proved to be extremely important in other ratification debates, as it helped Federalists win the votes of those who saw the need for the Constitution but opposed some of its provisions. In the following months, Maryland and South Carolina ratified the Constitution, but North Carolina voted against ratification, leaving the document just one state short of taking effect. In June 1788, New Hampshire and Virginia both ratified the document. In Virginia, as in Massachusetts, Federalists won support for the Constitution by promising ratification of several amendments. Though Anti-Federalism was strong in New York, its constitutional convention nonetheless ratified the document in July 1788 since failure to do so would leave the state outside of the union. Rhode Island, the lone state which had not sent a delegate to the Philadelphia Convention, was viewed as a lost cause by the Federalists due to its strong opposition to the proposed constitution, and it would not ratify the Constitution until 1790. Inauguration of a new government In September 1788, the Congress of the Confederation formally certified that the Constitution had been ratified. It also set the date for the presidential election and the first meeting of the new federal government. Additionally, Congress engaged in debate regarding where the incoming government would meet, with Baltimore briefly emerging as the favorite. To the displeasure of Southern and Western interests, Congress ultimately chose to retain New York City as the seat of government. Though Washington desired to resume his retirement following the Constitutional Convention, the American public at large anticipated that he would be the nation's first president. Federalists such as Hamilton eventually coaxed him to accept the office. On February 4, 1789, the Electoral College, the mechanism established by the Constitution to conduct the indirect presidential elections, met for the first time, with each state's presidential electors gathering in their state's capital. Under the rules then in place, each elector could vote for two persons (but the two people chosen by the elector could not both inhabit the same state as that elector), with the candidate who won the most votes becoming president and the candidate with the second-most becoming vice president. 
Each elector cast one vote for Washington, while John Adams won the most votes of all other candidates, and thus won election as vice president. Electors from 10 of the 13 states cast votes. There were no votes from New York, because the New York legislature failed to appoint its allotted electors in time; North Carolina and Rhode Island did not participate as they had not yet ratified the Constitution. The Federalists performed well in the concurrent House and Senate elections, ensuring that both chambers of the United States Congress would be dominated by proponents of the federal government established by the Constitution. This in turn ensured that there would not be a constitutional convention to propose amendments, which many Federalists had feared would critically weaken the federal government. The new federal government commenced operations with the seating of the 1st Congress in March 1789 and the inauguration of Washington the following month. In September 1789, Congress approved the United States Bill of Rights, a group of Constitutional amendments designed to protect individual liberties against federal interference, and the states ratified these amendments in 1791. After Congress voted for the Bill of Rights, North Carolina and Rhode Island ratified the Constitution in 1789 and 1790, respectively. Terminology The period of American history between the end of the American Revolutionary War and the ratification of the Constitution has also been referred to as the "critical period" of American history. During the 1780s, many thought that the country was experiencing a crisis of leadership, as reflected by John Quincy Adams's statement in 1787 that the country was in the midst of a "critical period". In his 1857 book, The Diplomatic History of the Administrations of Washington and Adams, William Henry Trescot became the first historian to apply the phrase "America's Critical Period" to the era in American history between 1783 and 1789. The phrase was popularized by John Fiske's 1888 book, The Critical Period of American History. Fiske's use of the term "critical period" refers to the importance of the era in determining whether the United States would establish a stronger central government or break up into individual fully sovereign states. The term "critical period" thus implicitly accepts the Federalist critique of the Articles of Confederation. Other historians have used an alternative term, the "Confederation Period", to describe U.S. history between 1781 and 1789. Historians such as Forrest McDonald have argued that the 1780s were a time of economic and political chaos. However, other historians, including Merrill Jensen, have argued that the 1780s were actually a relatively stable, prosperous time. Gordon Wood suggests that it was the idea of the Revolution and the thought that it would bring a utopian society to the new country that made it possible for people to believe they had fallen instead into a time of crisis. Historian John Ferling argues that, in 1787, only the federalists, a relatively small share of the population, viewed the era as a "Critical Period". Michael Klarman argues that the decade marked a high point of democracy and egalitarianism, and views the ratification of the Constitution in 1789 as a conservative counter-revolution. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-192] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten took over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most blocks stay fixed in their voxel position even when unsupported in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or trade with villagers (non-player characters), exchanging emeralds for different goods and vice versa. 
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is both the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience it as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are they affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
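The map-seed mechanism described earlier can be illustrated with a short, hedged sketch. The Java snippet below is not Mojang's actual world generator—real terrain generation uses layered noise functions and biome logic—and every class name, constant, and formula in it is invented for illustration. It only demonstrates the underlying idea: mixing a world seed with chunk coordinates yields a reproducible random stream per chunk, so the same seed always regenerates the same terrain without the world ever being stored in full.

```java
import java.util.Random;

// Illustrative sketch of seed-based, deterministic terrain generation.
// Names, constants, and the height formula are hypothetical, not Mojang's code.
public class SeededChunkSketch {

    static final int CHUNK_SIZE = 16; // a toy 16x16 column of terrain

    // Mix the world seed with chunk coordinates so every chunk gets its own,
    // reproducible random stream. The mixing constants are arbitrary.
    static long chunkSeed(long worldSeed, int chunkX, int chunkZ) {
        return worldSeed ^ (chunkX * 341873128712L + chunkZ * 132897987541L);
    }

    // Generate a toy height map for one chunk. The same world seed and chunk
    // coordinates always produce identical output, which is how a single seed
    // can describe an effectively infinite world.
    static int[][] generateHeights(long worldSeed, int chunkX, int chunkZ) {
        Random rng = new Random(chunkSeed(worldSeed, chunkX, chunkZ));
        int[][] heights = new int[CHUNK_SIZE][CHUNK_SIZE];
        int base = 60 + rng.nextInt(10); // baseline surface level for this chunk
        for (int x = 0; x < CHUNK_SIZE; x++) {
            for (int z = 0; z < CHUNK_SIZE; z++) {
                heights[x][z] = base + rng.nextInt(4); // small local variation
            }
        }
        return heights;
    }

    public static void main(String[] args) {
        long worldSeed = 123456789L;
        int[][] a = generateHeights(worldSeed, 0, 0);
        int[][] b = generateHeights(worldSeed, 0, 0);
        System.out.println("Same seed, same chunk, same height: " + (a[5][5] == b[5][5]));
        System.out.println("Surface height at (5,5) of chunk (0,0): " + a[5][5]);
    }
}
```

In this toy model, as in the game, only chunks the player actually visits need to be generated and saved; everything else can be recreated on demand from the seed.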
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, hosting one themselves or connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, and support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's style, including the return of the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009; on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft like world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed over a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue to what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start a hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed, and losing mobs would now have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot on an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
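The feasibility of the Danish Geodata Agency project described above comes down to simple arithmetic: a one-block-per-metre vertical scale only fits because Denmark's highest point (171 m) sits under the game's roughly 192-metre headroom above sea level. The sketch below illustrates that mapping; the sea-level and build-limit constants are assumptions chosen to reproduce that headroom, not values taken from the agency's actual pipeline or any Mojang API.

```python
# Minimal sketch (not the Danish Geodata Agency's actual tooling): mapping a
# digital elevation model value to a Minecraft-style block column height at a
# 1:1 vertical scale. The constants below are illustrative assumptions.

SEA_LEVEL_Y = 63                       # assumed in-game sea level at the time
MAX_BUILD_Y = 255                      # assumed build ceiling at the time
HEADROOM = MAX_BUILD_Y - SEA_LEVEL_Y   # ~192 blocks available above sea level

def column_height(elevation_m: float) -> int:
    """Convert a terrain elevation in metres to the top y level of a block column.

    One block = one metre, so Denmark's 171 m maximum fits under the ceiling;
    taller terrain would have to be clamped or vertically rescaled.
    """
    blocks_above_sea = max(0, round(elevation_m))
    return SEA_LEVEL_Y + min(blocks_above_sea, HEADROOM)

# Denmark's highest natural point (~171 m) stays below the ceiling:
print(column_height(171))   # 234, comfortably under 255
# Terrain above the available headroom would be clamped to the ceiling:
print(column_height(400))   # 255
```

A country with mountains well above the available headroom would force either clamping, as in the last call, or abandoning the full-scale 1:1 mapping that made the Danish project notable.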
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-11] | [TOKENS: 4993] |
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD/CE astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m , while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. 
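As a rough worked example of how the boundary figures above are used, the snippet below converts the quoted right ascension limits from hours and minutes into degrees (one hour of right ascension corresponds to 15 degrees) and tests whether a sky coordinate falls inside Orion's bounding box. This is only an approximation for illustration: the actual IAU boundary is the 26-sided polygon described above, and the function names here are hypothetical, not part of any astronomy library.

```python
# Rough illustration only: a bounding-box check built from the quoted extremes,
# not the exact 26-sided IAU constellation boundary.

def ra_to_degrees(hours: int, minutes: float) -> float:
    """Right ascension in hours and minutes -> degrees (24 h = 360°, so 1 h = 15°)."""
    return (hours + minutes / 60.0) * 15.0

RA_MIN = ra_to_degrees(4, 43.3)    # ≈ 70.8°
RA_MAX = ra_to_degrees(6, 25.5)    # ≈ 96.4°
DEC_MIN, DEC_MAX = -10.97, 22.87   # declination limits in degrees

def roughly_in_orion(ra_deg: float, dec_deg: float) -> bool:
    """True if a coordinate falls inside Orion's rough bounding box."""
    return RA_MIN <= ra_deg <= RA_MAX and DEC_MIN <= dec_deg <= DEC_MAX

# Betelgeuse (RA ≈ 88.8°, Dec ≈ +7.4°) lies inside the box;
# Sirius (RA ≈ 101.3°, Dec ≈ −16.7°) lies outside, in Canis Major.
print(roughly_in_orion(88.8, 7.4))     # True
print(roughly_in_orion(101.3, -16.7))  # False
```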
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. 
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. 
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö) and the stars "hanging" from the Belt as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool). Though, this name perhaps is etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains.: Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). In ancient Aram, the constellation was known as Nephîlā′, the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic, saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. 
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who can retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned his arm and married his daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki which represents a child's string figure similar to a cat's cradle. Several precolonial Filipinos referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. 
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He sometimes is depicted to have a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and are referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Third_party_developer] | [TOKENS: 4631] |
Contents Video game developer A video game developer is a software developer specializing in video game development – the process and related disciplines of creating video games . A game developer can range from one person who undertakes all tasks to a large business with employee responsibilities split between individual disciplines, such as programmers, designers, artists, etc. Most game development companies have video game publisher financial and usually marketing support. Self-funded developers are known as independent or indie developers and usually make indie games. A developer may specialize in specific game engines or specific video game consoles, or may develop for several systems (including personal computers and mobile devices). Some focus on porting games from one system to another, or translating games from one language to another. Less commonly, some do software development work in addition to games. Most video game publishers maintain development studios (such as Electronic Arts's EA Canada, Square Enix's studios, Activision's Radical Entertainment, Nintendo EPD and Sony's Polyphony Digital and Naughty Dog). However, since publishing is still their primary activity they are generally described as "publishers" rather than "developers". Developers may be private as well. Types In the video game industry, a first-party developer is part of a company that manufactures a video game console and develops mainly for it. First-party developers may use the name of the company itself (such as Nintendo), have a specific division name (such as Sony's Polyphony Digital) or have been an independent studio before being acquired by the console manufacturer (such as Rare or Naughty Dog). Whether by purchasing an independent studio or by founding a new team, the acquisition of a first-party developer involves a huge financial investment on the part of the console manufacturer, which is wasted if the developer fails to produce a hit game on time. However, using first-party developers saves the cost of having to make royalty payments on a game's profits. Current examples of first-party studios include Nintendo EPD for Nintendo, PlayStation Studios for Sony, and Xbox Game Studios for Microsoft Gaming. Second-party developer is a colloquial term often used by gaming enthusiasts and media to describe game studios that take development contracts from platform holders and develop games exclusive to that platform, i.e. a non-owned developer making games for a first-party company. As a balance to not being able to release their game for other platforms, second-party developers are usually offered higher royalty rates than third-party developers. These studios may have exclusive publishing agreements (or other business relationships) with the platform holder, but maintain independence so that upon completion or termination of their contracts, they are able to continue developing games for other publishers if they choose to. For example, while HAL Laboratory initially began developing games on personal computers like the MSX, they became one of the earliest second-party developers for Nintendo, developing exclusively for Nintendo's consoles starting with the Famicom, though they would self-publish their mobile games. A third-party developer may also publish games, or work for a video game publisher to develop a title. Both publisher and developer have considerable input in the game's design and content. However, the publisher's wishes generally override those of the developer. 
Work for hire studios solely execute the publishers vision. The business arrangement between the developer and publisher is governed by a contract, which specifies a list of milestones intended to be delivered over a period of time. By updating its milestones, the publisher verifies that work is progressing quickly enough to meet its deadline and can direct the developer if the game is not meeting expectations. When each milestone is completed (and accepted), the publisher pays the developer an advance on royalties. Successful developers may maintain several teams working on different games for different publishers. Generally, however, third-party developers tend to be small, close-knit teams. Third-party game development is a volatile sector, since small developers may depend on income from a single publisher; one canceled game may devastate a small developer. Because of this, many small development companies are short-lived. A common exit strategy for a successful video game developer is to sell the company to a publisher, becoming an in-house developer. In-house development teams tend to have more freedom in game design and content than third-party developers. One reason is that since the developers are the publisher's employees, their interests align with those of the publisher; the publisher may spend less effort ensuring that the developer's decisions do not enrich the developer at the publisher's expense. Activision in 1979 became the first third-party video game developer. When four Atari, Inc. programmers left the company following its sale to Warner Communications, partially over the lack of respect that the new management gave to programmers, they used their knowledge of how Atari VCS game cartridges were programmed to create their own games for the system, founding Activision in 1979 to sell these. Atari took legal action to try to block the sale of these games, but the companies ultimately settled, with Activision agreeing to pay a portion of their sales as a license fee to Atari for developing for the console. This established the use of licensing fees as a model for third-party development that persists into the present. The licensing fee approach was further enforced by Nintendo when it decided to allow other third-party developers to make games for the Famicom console, setting a 30% licensing fee that covered game cartridge manufacturing costs and development fees. The 30% licensing fee for third-party developers has also persisted to the present, being a de facto rate used for most digital storefronts for third-party developers to offer their games on the platform. In recent years, larger publishers have acquired several third-party developers. While these development teams are now technically "in-house", they often continue to operate in an autonomous manner (with their own culture and work practices). For example, Activision acquired Raven (1997); Neversoft (1999), which merged with Infinity Ward in 2014; Z-Axis (2001); Treyarch (2001); Luxoflux (2002); Shaba (2002); Infinity Ward (2003) and Vicarious Visions (2005). All these developers continue operating much as they did before acquisition, the primary differences being exclusivity and financial details. Publishers tend to be more forgiving of their own development teams going over budget (or missing deadlines) than third-party developers. 
A developer may not be the primary entity creating a piece of software, usually providing an external software tool which helps organize (or use) information for the primary software product. Such tools may be a database, Voice over IP, or add-in interface software; this is also known as middleware. Examples of this include SpeedTree and Havok. Independents are software developers which are not owned by (or dependent on) a single publisher. Some of these developers self-publish their games, relying on the Internet and word of mouth for publicity. Without the large marketing budgets of mainstream publishers, their products may receive less recognition than those of larger publishers such as Sony, Microsoft or Nintendo. With the advent of digital distribution of inexpensive games on game consoles, it is now possible for indie game developers to forge agreements with console manufacturers for broad distribution of their games. Digital distribution services for PC games, such as Steam, have also contributed to facilitating the distribution of indie games. Other indie game developers create game software for a number of video-game publishers on several gaming platforms. In recent years this model has been in decline; larger publishers, such as Electronic Arts and Activision, increasingly turn to internal studios (usually former independent developers acquired for their development needs). Quality of life Video game development is usually conducted in a casual business environment, with t-shirts and sandals as common work attire. While some workers find this type of environment rewarding and pleasant professionally, there has been criticism of this "uniform" potentially adding to a hostile work environment for women. The industry also requires long working hours from its employees (sometimes to an extent seen as unsustainable). Employee burnout is not uncommon. An entry-level programmer can make, on average, over $66,000 annually only if they are successful in obtaining a position in a medium to large video game company. An experienced game-development employee, depending on their expertise and experience, averaged roughly $73,000 in 2007. Indie game developers may only earn between $10,000 and $50,000 a year depending on how financially successful their titles are. In addition to being part of the software industry, game development is also within the entertainment industry; most sectors of the entertainment industry (such as films and television) require long working hours and dedication from their employees, such as a willingness to relocate and/or being required to develop games that do not appeal to their personal taste. The creative rewards of work in the entertainment business attract labor to the industry, creating a competitive labor market that demands a high level of commitment and performance from employees. Industry communities, such as the International Game Developers Association (IGDA), are conducting increasing discussions about the problem; they are concerned that working conditions in the industry cause a significant deterioration in employees' quality of life. Some video game developers and publishers have been accused of the excessive invocation of "crunch time". "Crunch time" is the point at which the team is thought to be failing to achieve milestones needed to launch a game on schedule.
The complexity of workflow, reliance on third-party deliverables, and the intangibles of artistic and aesthetic demands in video game creation create difficulty in predicting milestones. The use of crunch time is also seen to be exploitative of the younger workforce in video games, who have not had the time to establish a family and who were eager to advance within the industry by working long hours. Because crunch time tends to come from a combination of corporate practices as well as peer influence, the term "crunch culture" is often used to discuss video game development settings where crunch time may be seen as the norm rather than the exception. The use of crunch time as a workplace standard gained attention first in 2004, when Erin Hoffman exposed the use of crunch time at Electronic Arts, a situation known as the "EA Spouses" case. A similar "Rockstar Spouses" case gained further attention in 2010 over working conditions at Rockstar San Diego. Since then, there has generally been negative perception of crunch time from most of the industry as well as from its consumers and other media. Game development has generally had a predominantly male workforce. In 1989, according to Variety, women constituted only 3% of the gaming industry, while a 2017 IGDA survey found that the female demographic in game development had risen to about 20%. Taking into account that a 2017 ESA survey found 41% of video game players were female, this represented a significant gender gap in game development. The male-dominated industry, most of whose members have grown up playing video games and are part of the video game culture, can create a culture of "toxic geek masculinity" within the workplace. In addition, the conditions behind crunch time are far more discriminating towards women, as this requires them to commit their time exclusively to the company rather than to more personal activities like raising a family. These factors established conditions within some larger development studios where female developers have found themselves discriminated against in workplace hiring and promotion, as well as being the target of sexual harassment. This can be coupled with similar harassment from external groups, such as during the 2014 Gamergate controversy. Major investigations into allegations of sexual harassment and misconduct that went unchecked by management, as well as discrimination by employers, have been brought up against Riot Games, Ubisoft and Activision Blizzard in the late 2010s and early 2020s, alongside smaller studios and individual developers. However, while other entertainment industries have had similar exposure through the Me Too movement and have tried to address the symptoms of these problems industry-wide, the video game industry had yet to have its Me Too moment, even as late as 2021. There also tends to be pay-related discrimination against women in the industry. According to Gamasutra's Game Developer Salary Survey 2014, women in the United States made 86 cents for every dollar men made. Women in game design had the closest equity, making 96 cents for every dollar men made in the same job, while women audio professionals had the largest gap, making 68% of what men in the same position made. Increasing the representation of women in the video game industry requires breaking a feedback loop of the apparent lack of female representation in the production of video games and in the content of video games.
Efforts have been made to provide a strong STEM (science, technology, engineering, and mathematics) background for women at the secondary education level, but there are issues with tertiary education such as at colleges and universities, where game development programs tend to reflect the male-dominated demographics of the industry, a factor that may lead women with strong STEM backgrounds to choose other career goals. There is also a significant gap in racial minorities within the video game industry; a 2019 IGDA survey found only 2% of developers considered themselves to be of African descent and 7% Hispanic, while 81% were Caucasian; in contrast, 2018 United States Census estimates put the U.S. population at 13% of African descent and 18% Hispanic. In 2014 and 2015 surveys of job positions and salaries, the IGDA found that people of color were both underrepresented in senior management roles and underpaid in comparison to white developers. Further, because video game developers typically draw from personal experiences in building game characters, this diversity gap has led to few racial-minority characters being featured as main characters within video games. Minority developers have also been harassed by external groups due to the toxic nature of the video game culture. This racial diversity issue has similar ties to the gender one, and similar methods to resolve both have been suggested, such as improving grade school education, developing games that appeal beyond the white, male gamer stereotype, and identifying toxic behavior in both video game workplaces and online communities that perpetuates discrimination against gender and race. With regard to LGBT and other gender or sexual orientations, the video game industry typically shares the same demographics as the larger population, based on a 2005 IGDA survey. Those in the LGBT community do not find workplace issues with their identity, though they work to improve the representation of LGBT themes within video games in the same manner as with racial minorities. However, LGBT developers have also come under the same type of harassment from external groups as women and racial minorities have, due to the nature of the video game culture. The industry also is recognized to have an ageism issue, discriminating against the hiring and retention of older developers. A 2016 IGDA survey found only 3% of developers were over 50 years old, while at least two-thirds were between 20 and 34; these numbers show a far lower average age compared to the U.S. national average of about 41.9 that same year. While discrimination by age in hiring practices is generally illegal, companies often target their oldest workers first during layoffs or other periods of reduction. Older developers with experience may find themselves too qualified for the types of positions that other game development companies seek, given the salaries and compensation offered. Some of the larger video game developers and publishers have also engaged contract workers through agencies to help add manpower in game development, in part to alleviate crunch time from employees. Contractors are brought on for a fixed period and generally work hours similar to those of full-time staff members, assisting across all areas of video game development, but as contractors they do not get any benefits such as paid time off or health care from the employer; they also are typically not credited on games that they work on for this reason.
The practice itself is legal and common in other engineering and technology areas, and generally such arrangements are expected to lead either to a full-time position or to the end of the contract. But more recently, its use in the video game industry has been compared to Microsoft's past use of "permatemp" workers, contractors whose terms were continually renewed and who were treated for all purposes as employees but received no benefits. While Microsoft has moved away from the practice, the video game industry has adopted it more frequently. Around 10% of the workforce in video games is estimated to be from contract labor. Similar to other tech industries, video game developers are typically not unionized. This is a result of the industry being driven more by creativity and innovation than by production, the lack of distinction between management and employees in the white-collar area, and the pace at which the industry moves, which makes union actions difficult to plan out. However, when situations related to crunch time become prevalent in the news, there have typically been follow-up discussions about the potential to form a union. A survey performed by the International Game Developers Association in 2014 found that more than half of the 2,200 developers surveyed favored unionization. A similar survey of over 4,000 game developers run by the Game Developers Conference in early 2019 found that 47% of respondents felt the video game industry should unionize. In 2016, voice actors in the Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA) union doing work for video games struck several major publishers, demanding better royalty payments and provisions related to the safety of their vocal performances, when their union's standard contract was up for renewal. The voice actor strike lasted for over 300 days into 2017 before a new deal was made between SAG-AFTRA and the publishers. While this had some effects on a few games within the industry, it brought to the forefront the question of whether video game developers should unionize. A grassroots movement, Game Workers Unite, was established around 2017 to discuss and debate issues related to unionization of game developers. The group came to the forefront during the March 2018 Game Developers Conference by holding a roundtable discussion with the International Game Developers Association (IGDA), the professional association for developers. Statements made by the IGDA's then executive director Jen MacLean relating to IGDA's activities had been seen by some as anti-union, and Game Workers Unite desired to start a conversation to lay out the need for developers to unionize. In the wake of the sudden near-closure of Telltale Games in September 2018, the movement again called for the industry to unionize. The movement argued that Telltale had not given any warning to the 250 employees it let go, having hired additional staff as recently as a week prior, and left them without pensions or health-care options; it was further argued that the studio considered this a closure rather than layoffs so as to get around the advance-notice requirements that the Worker Adjustment and Retraining Notification Act of 1988 imposes preceding layoffs. The situation was argued to be "exploitive", as Telltale had been known to force its employees to frequently work under "crunch time" to deliver its games. By the end of 2018, a United Kingdom trade union, Game Workers Unite UK, an affiliate of the Game Workers Unite movement, had been legally established.
Following Activision Blizzard's financial report for the previous quarter in February 2019, the company said that it would be laying off around 775 employees (about 8% of its workforce) despite having record profits for that quarter. Further calls for unionization came from this news, including the AFL–CIO writing an open letter to video game developers encouraging them to unionize. In January 2020, Game Workers Unite and the Communications Workers of America established a new campaign to push for unionization of video game developers, the Campaign to Organize Digital Employees (CODE). Initial efforts for CODE were aimed at determining what approach to unionization would be best suited for the video game industry. Whereas some video game employees believe they should follow the craft-based model used by SAG-AFTRA, which would unionize workers based on job function, others feel an industry-wide union, regardless of job position, would be better. Starting in 2021, several smaller game studios in the United States began efforts to unionize. These mostly involved teams doing quality assurance rather than developers. These studios included three QA studios under Blizzard Entertainment: Raven Software, Blizzard Albany, and Proletariat; and Zenimax Media's QA team. Microsoft, which had previously acquired Zenimax and announced plans to acquire Blizzard via the acquisition of Activision Blizzard, stated it supported these unionization efforts. After this acquisition, the employees of Bethesda Game Studios, part of Zenimax under Microsoft, unionized under the Communications Workers of America (CWA) in July 2024. Over 500 employees within Blizzard Entertainment's World of Warcraft division also unionized with CWA that same month. Similarly, Blizzard's Overwatch team unionized in May 2025; Raven Software, Blizzard's story and franchise development team, and Blizzard's Diablo team separately voted for unionization in August 2025; and the Hearthstone and Warcraft Rumble teams followed with their vote in October 2025. By this point, over 2,000 Blizzard employees had become unionized. Sweden presents a unique case where nearly all parts of its labor force, including white-collar jobs such as video game development, may engage with labor unions under the Employment Protection Act, often through collective bargaining agreements. Developer DICE had reached its union agreements in 2004. Paradox Interactive became one of the first major publishers to support unionization efforts in June 2020 with its own agreements to cover its Swedish employees within two labor unions, Unionen and SACO. In Australia, video game developers could join other unions, but the first video game-specific union, Game Workers Unite Australia, was formed in December 2021 under Professionals Australia to become active in 2022. In Canada, in a historic move, video game workers in Edmonton unanimously voted to unionize for the first time in June 2022. In January 2023, after not being credited in The Last of Us HBO adaptation, Bruce Straley called for unionization of the video game industry. He told the Los Angeles Times: "Someone who was part of the co-creation of that world and those characters isn't getting a credit or a nickel for the work they put into it. Maybe we need unions in the video game industry to be able to protect creators." An industry-wide union for North American workers, the United Videogame Workers-CWA (UVW-CWA), was announced in March 2025 with support from the Communications Workers of America.
ZA/UM, the developer of Disco Elysium, became the first video game studio in the United Kingdom to unionize, organizing under the Independent Workers' Union of Great Britain in October 2025.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Shalom] | [TOKENS: 891] |
Contents Shalom Shalom (Hebrew: שָׁלוֹם šālōm) is a Hebrew word meaning peace and can be used idiomatically to mean hello and goodbye. As it does in English,[citation needed] it can refer to either peace between two entities (especially between a person and God or between two countries), or to the well-being, welfare or safety of an individual or a group of individuals. The word shalom is also found in many other expressions and names. Its equivalent cognate in Arabic is salaam, sliem in Maltese, Shlama in Neo-Aramaic dialects, and sälam in Ethiopian Semitic languages from the Proto-Semitic root Š-L-M. Etymology In Hebrew, words are built on "roots", generally of three consonants. When the root consonants appear with various vowels and additional letters, a variety of words, often with some relation in meaning, can be formed from a single root. Thus from the root sh-l-m come the words shalom ("peace, well-being"), hishtalem ("it was worth it"), shulam ("was paid for"), meshulam ("paid for in advance"), mushlam ("perfect"), and shalem ("whole"). Biblically, shalom is seen in reference to the well-being of others (Genesis 43:27, Exodus 4:18), to treaties (I Kings 5:12), and in prayer for the wellbeing of cities or nations (Psalm 122:6, Jeremiah 29:7). The meaning of completeness, central to the term shalom, can also be confirmed in related terms found in other Semitic languages. The Assyrian term salamu means to be complete, unharmed, paid/atoned. Sulmu, another Assyrian term, means welfare. A closer relation to the idea of shalom as a concept and action is seen in the Arabic root salaam, meaning, among other things, to be safe, secure and forgiven. In expressions The word "shalom" can be used for all parts of speech; as a noun, adjective, verb, adverb, and interjection. It categorizes all shaloms. The word shalom is used in a variety of expressions and contexts in Hebrew speech and writing: Jewish religious principle In Judaism, shalom is one of the underlying principles of the Torah: "Her ways are pleasant ways and all her paths are shalom". The Talmud explains, "The entire Torah is for the sake of the ways of shalom". Maimonides comments in his Mishneh Torah: "Great is peace, as the whole Torah was given in order to promote peace in the world, as it is stated, 'Her ways are pleasant ways and all her paths are peace'". In the book Not the Way It's Supposed to Be: A Breviary of Sin, Christian author Cornelius Plantinga described the biblical concept of shalom: The webbing together of God, humans, and all creation in justice, fulfillment, and delight is what the Hebrew prophets call shalom. We call it peace but it means far more than mere peace of mind or a cease-fire between enemies. In the Bible, shalom means universal flourishing, wholeness and delight – a rich state of affairs in which natural needs are satisfied and natural gifts fruitfully employed, a state of affairs that inspires joyful wonder as its Creator and Savior opens doors and welcomes the creatures in whom he delights. Shalom, in other words, is the way things ought to be. Use as name The Talmud says, "the name of God is 'Peace'", therefore, one is not permitted to greet another with the word 'shalom' in places such as a bathroom. Biblical references lead some Christians to teach that "Shalom" is one of the sacred names of God. Shalom is also a Hebrew name, found commonly in Israel as both a given and family name. While traditionally masculine, it is occasionally androgynous, such as in the case of model Shalom Harlow. 
Shalom can be part of an organization's name, including the titles of establishments promoting Israeli-Arab peace. Shalom is also used in Jewish religious contexts, such as the names of synagogues and parks.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mahayana_Buddhism] | [TOKENS: 17852] |
Contents Mahayana Mahayana[a] is the largest branch of Buddhism, followed by Theravada. It is a broad group of Buddhist traditions, texts, philosophies, and practices that developed in Amaravati region of ancient India (c. 1st century BCE onwards). Mahāyāna accepts the main scriptures and teachings of early Buddhism but also recognizes various doctrines and texts that are not accepted by Theravada Buddhism as original. These include the Mahāyāna sūtras and their emphasis on the bodhisattva path and Prajñāpāramitā. Vajrayana or Mantra traditions are a subset of Mahāyāna which makes use of numerous Tantric methods Vajrayānists consider to help achieve Buddhahood. Mahāyāna also refers to the path of the bodhisattva striving to become a fully awakened Buddha for the benefit of all sentient beings, and is thus also called the "Bodhisattva Vehicle" (Bodhisattvayāna).[note 1] Mahāyāna Buddhism generally sees the goal of becoming a Buddha through the bodhisattva path as being available to all and sees the state of the arhat as incomplete. Mahāyāna also includes numerous Buddhas and bodhisattvas that are not found in Theravada (eg. Amitābha and Vairocana, as conceptions of Buddha belong to Mahāyāna tradition and are absent from Theravada understandings of the Buddha). Mahāyāna Buddhist philosophy also promotes unique theories, such as the Madhyamaka theory of emptiness (śūnyatā), the Vijñānavāda ("the doctrine of consciousness" also called "mind-only"), and the Buddha-nature teaching. While initially a small movement in India, Mahāyāna eventually grew to become an influential force in Indian Buddhism. Large scholastic centers associated with Mahāyāna such as Nalanda and Vikramashila thrived between the 7th and 12th centuries. In the course of its history, Mahāyāna Buddhism spread from South Asia to East Asia, Southeast Asia and the Himalayan regions. Various Mahāyāna traditions are the predominant forms of Buddhism found in China, Korea, Japan, Taiwan, Singapore, Vietnam, Philippines, Malaysia and Indonesia. Since Vajrayana is a tantric form of Mahāyāna, Mahāyāna Buddhism is also dominant in Tibet, Mongolia, Bhutan, and other Himalayan regions. It has also been traditionally present elsewhere in Asia as a minority among Buddhist communities in Nepal, Malaysia, Indonesia and regions with Asian diaspora communities. As of 2010, the Mahāyāna tradition was the largest major tradition of Buddhism, with 53% of Buddhists belonging to East Asian Mahāyāna and 6% to Vajrayana, compared to 36% to Theravada. Etymology According to Jan Nattier, the term Mahāyāna ("Great Vehicle") was originally an honorary synonym for Bodhisattvayāna ("Bodhisattva Vehicle"), the vehicle of a bodhisattva seeking buddhahood for the benefit of all sentient beings. The term Mahāyāna (which had earlier been used simply as an epithet for Buddhism itself) was therefore adopted at an early date as a synonym for the path and the teachings of the bodhisattvas. Since it was simply an honorary term for Bodhisattvayāna, the adoption of the term Mahāyāna and its application to Bodhisattvayāna did not represent a significant turning point in the development of a Mahāyāna tradition. The earliest Mahāyāna texts, such as the Lotus Sūtra, often use the term Mahāyāna as a synonym for Bodhisattvayāna, but the term Hīnayāna is comparatively rare in the earliest sources. The presumed dichotomy between Mahāyāna and Hīnayāna can be deceptive, as the two terms were not actually formed in relation to one another in the same era. 
Among the earliest and most important references to Mahāyāna are those that occur in the Lotus Sūtra (Skt. Saddharma Puṇḍarīka Sūtra), dating to between the 1st century BCE and the 1st century CE. Seishi Karashima has suggested that the term first used in an earlier Gandhāri Prakrit version of the Lotus Sūtra was not the term mahāyāna but the Prakrit word mahājāna in the sense of mahājñāna (great knowing). At a later stage, when the early Prakrit word was converted into Sanskrit, this mahājāna, being phonetically ambiguous, may have been converted into mahāyāna, possibly because of a double meaning in the famous Parable of the Burning House, which speaks of three vehicles or carts (Skt: yāna).[note 2] In Chinese, Mahāyāna is called 大乘 (dàshèng, or dàchéng), which is a calque of mahā (great, 大) and yāna (vehicle, 乘). There is also the transliteration 摩诃衍那. The term appears in some of the earliest Mahāyāna texts, including a translation of the Lotus Sutra made during the reign of Emperor Ling of Han. It also appears in the Chinese Āgamas, though scholars like Yin Shun argue that this is a later addition. Some Chinese scholars also argue that the meaning of the term in these earlier texts is different from later ideas of Mahāyāna Buddhism. History The origins of Mahāyāna are still not completely understood and there are numerous competing theories. The earliest Western views of Mahāyāna assumed that it existed as a separate school in competition with the so-called "Hīnayāna" schools. Some of the major theories about the origins of Mahāyāna include the following: The lay origins theory was first proposed by Jean Przyluski and then defended by Étienne Lamotte and Akira Hirakawa. This view states that laypersons were particularly important in the development of Mahāyāna and is partly based on some texts like the Vimalakirti Sūtra, which praise lay figures at the expense of monastics. This theory is no longer widely accepted, since numerous early Mahāyāna works promote monasticism and asceticism. The Mahāsāṃghika origin theory argues that Mahāyāna developed within the Mahāsāṃghika tradition. This is defended by scholars such as Hendrik Kern, A.K. Warder and Paul Williams, who argue that at least some Mahāyāna elements developed among Mahāsāṃghika communities (from the 1st century BCE onwards), possibly in the area along the Kṛṣṇa River in the Āndhra region of southern India. The Mahāsāṃghika doctrine of the supramundane (lokottara) nature of the Buddha is sometimes seen as a precursor to Mahāyāna views of the Buddha. Some scholars also see Mahāyāna figures like Nāgārjuna, Dignaga, Candrakīrti, Āryadeva, and Bhāviveka as having ties to the Mahāsāṃghika tradition of Āndhra. However, other scholars have also pointed to different regions as being important, such as Gandhara and northwest India. In connection with this, Warder states that "the sudden appearance of large numbers of (Mahayana) teachers and texts (in North India in the second century AD) would seem to require some previous preparation and development, and this we can look for in the South." One argument against the Mahāsāṃghika origins theory is that certain Mahāyāna sutras show traces of having developed among other nikāyas or monastic orders (such as the Dharmaguptaka). Because of such evidence, scholars like Paul Harrison and Paul Williams argue that the movement was not sectarian and was possibly pan-Buddhist.
According to Jan Nattier, the concept of Mahāyāna, as used in an early text such as the Ugraparipṛcchā Sūtra, did not originally refer to a separate formal school or sect of Buddhism, but rather denoted a certain set of ideals, and later doctrines, for aspiring bodhisattvas; only later did the tensions between monks pursuing different paths lead to an "institutional fission generating a separate Mahāyāna community". The "forest hypothesis" meanwhile states that Mahāyāna arose mainly among "hard-core ascetics, members of the forest dwelling (aranyavasin) wing of the Buddhist Order", who were attempting to imitate the Buddha's forest living. This has been defended by Paul Harrison, Jan Nattier and Reginald Ray. This theory is based on certain sutras like the Ugraparipṛcchā Sūtra and the Mahāyāna Rāṣṭrapālaparipṛcchā, which promote ascetic practice in the wilderness as a superior and elite path. These texts criticize monks who live in cities and who denigrate the forest life. Jan Nattier's study of the Ugraparipṛcchā Sūtra, A Few Good Men (2003), argues that this sutra represents the earliest form of Mahāyāna, which presents the bodhisattva path as a 'supremely difficult enterprise' of elite monastic forest asceticism. Daniel Boucher's study of the Rāṣṭrapālaparipṛcchā-sūtra (2008) is another recent work on this subject. The cult of the book theory, defended by Gregory Schopen, states that Mahāyāna arose among a number of loosely connected book-worshiping groups of monastics, who studied, memorized, copied and revered particular Mahāyāna sūtras. Schopen thinks they were inspired by cult shrines where Mahāyāna sutras were kept. Schopen also argued that these groups mostly rejected stupa worship, or the worship of holy relics. David Drewes has recently argued against all of the major theories outlined above. He points out that there is no actual evidence for the existence of book shrines and that the practice of sutra veneration was pan-Buddhist, not distinctly Mahāyāna. Furthermore, Drewes argues that "Mahāyāna sutras advocate mnemic/oral/aural practices more frequently than they do written ones." Regarding the forest hypothesis, he points out that only a few Mahāyāna sutras directly advocate forest dwelling, while the others either do not mention it or see it as unhelpful, promoting easier practices such as "merely listening to the sutra, or thinking of particular Buddhas, that they claim can enable one to be reborn in special, luxurious 'pure lands' where one will be able to make easy and rapid progress on the bodhisattva path and attain Buddhahood after as little as one lifetime." Drewes states that the evidence merely shows that "Mahāyāna was primarily a textual movement, focused on the revelation, preaching, and dissemination of Mahāyāna sutras, that developed within, and never really departed from, traditional Buddhist social and institutional structures." Drewes points out the importance of dharmabhāṇakas (preachers and reciters of these sutras) in the early Mahāyāna sutras. This figure is widely praised as someone who should be respected, obeyed ('as a slave serves his lord'), and donated to, and it is thus possible that these people were the primary agents of the Mahāyāna movement. Early Mahayana came directly from some of the "early Buddhist schools" and was a successor to them. The earliest textual evidence of "Mahāyāna" comes from sūtras ("discourses", scriptures) originating around the beginning of the common era.
Jan Nattier has noted that some of the earliest Mahāyāna texts, such as the Ugraparipṛcchā Sūtra, use the term "Mahāyāna", yet there is no doctrinal difference between Mahāyāna in this context and the early schools. Instead, Nattier writes that in the earliest sources, "Mahāyāna" referred to the rigorous emulation of Gautama Buddha's path to Buddhahood. Some important evidence for early Mahāyāna Buddhism comes from the texts translated by the Indo-Scythian monk Lokakṣema in the 2nd century CE, who came to China from the kingdom of Gandhāra. These are some of the earliest known Mahāyāna texts.[note 3] Study of these texts by Paul Harrison and others shows that they strongly promote monasticism (contra the lay origin theory), acknowledge the legitimacy of arhatship, and do not show any attempt to establish a new sect or order. Some of these texts emphasize ascetic practices, forest dwelling, and deep states of meditative concentration (samadhi). Indian Mahāyāna never had nor ever attempted to have a separate Vinaya or ordination lineage from the early schools of Buddhism, and therefore each bhikṣu or bhikṣuṇī adhering to the Mahāyāna formally belonged to one of the early Buddhist schools. Membership in these nikāyas, or monastic orders, continues today, with the Dharmaguptaka nikāya being used in East Asia and the Mūlasarvāstivāda nikāya being used in Tibetan Buddhism. Therefore, Mahāyāna was never a separate monastic sect outside of the early schools. Paul Harrison clarifies that while monastic Mahāyānists belonged to a nikāya, not all members of a nikāya were Mahāyānists. From the accounts of Chinese monks visiting India, we now know that both Mahāyāna and non-Mahāyāna monks in India often lived in the same monasteries side by side. It is also possible that, formally, Mahāyāna would have been understood as a group of monks or nuns within a larger monastery taking a vow together (known as a "kriyākarma") to memorize and study a Mahāyāna text or texts. The earliest stone inscription containing a recognizably Mahāyāna formulation and a mention of the Buddha Amitābha (an important Mahāyāna figure) was found in the Indian subcontinent in Mathura, and dated to around 180 CE. Remains of a statue of a Buddha bear the Brāhmī inscription: "Made in the year 28 of the reign of King Huviṣka, ... for the Blessed One, the Buddha Amitābha." There is also some evidence that the Kushan Emperor Huviṣka himself was a follower of Mahāyāna. A Sanskrit manuscript fragment in the Schøyen Collection describes Huviṣka as having "set forth in the Mahāyāna." Evidence of the name "Mahāyāna" in Indian inscriptions in the period before the 5th century is very limited in comparison to the multiplicity of Mahāyāna writings transmitted from Central Asia to China at that time.[note 4][note 5][note 6] Based on archeological evidence, Gregory Schopen argues that Indian Mahāyāna remained "an extremely limited minority movement – if it remained at all – that attracted absolutely no documented public or popular support for at least two more centuries." Likewise, Joseph Walser speaks of Mahāyāna's "virtual invisibility in the archaeological record until the fifth century". Schopen also sees this movement as being in tension with other Buddhists, "struggling for recognition and acceptance". Their "embattled mentality" may have led to certain elements found in Mahāyāna texts like the Lotus Sūtra, such as a concern with preserving texts.
Schopen, Harrison and Nattier also argue that these communities were probably not a single unified movement, but scattered groups based on different practices and sutras. One reason for this view is that Mahāyāna sources are extremely diverse, advocating many different, often conflicting doctrines and positions, as Jan Nattier writes: Thus we find one scripture (the Aksobhya-vyuha) that advocates both śrāvaka and bodhisattva practices, propounds the possibility of rebirth in a pure land, and enthusiastically recommends the cult of the book, yet seems to know nothing of emptiness theory, the ten bhumis, or the trikaya, while another (the P'u-sa pen-yeh ching) propounds the ten bhumis and focuses exclusively on the path of the bodhisattva, but never discusses the paramitas. A Madhyamika treatise (Nagarjuna's Mulamadhyamika-karikas) may enthusiastically deploy the rhetoric of emptiness without ever mentioning the bodhisattva path, while a Yogacara treatise (Vasubandhu's Madhyanta-vibhaga-bhasya) may delve into the particulars of the trikaya doctrine while eschewing the doctrine of ekayana. We must be prepared, in other words, to encounter a multiplicity of Mahayanas flourishing even in India, not to mention those that developed in East Asia and Tibet. In spite of being a minority in India, Indian Mahāyāna was an intellectually vibrant movement, which developed various schools of thought during what Jan Westerhoff has called "The Golden Age of Indian Buddhist Philosophy" (from the beginning of the first millennium CE up to the 7th century). Some major Mahāyāna traditions are Prajñāpāramitā, Mādhyamaka, Yogācāra, Buddha-nature (Tathāgatagarbha), and, most recently, the school of Dignaga and Dharmakirti. Major early figures include Nagarjuna, Āryadeva, Aśvaghoṣa, Asanga, Vasubandhu, and Dignaga. Mahāyāna Buddhists seem to have been active in the Kushan Empire (30–375 CE), a period that saw great missionary and literary activity by Buddhists. This is supported by the works of the historian Taranatha. The Mahāyāna movement (or movements) remained quite small until it experienced much growth in the fifth century. Very few manuscripts have been found from before the fifth century (the exceptions are from Bamiyan). According to Walser, "the fifth and sixth centuries appear to have been a watershed for the production of Mahāyāna manuscripts." Likewise, it is only in the 4th and 5th centuries CE that epigraphic evidence shows some kind of popular support for Mahāyāna, including some possible royal support at the kingdom of Shanshan as well as in Bamiyan and Mathura. Even after the 5th century, the epigraphic evidence which uses the term Mahāyāna remains quite small and is notably mainly monastic, not lay. By this time, Chinese pilgrims such as Faxian (337–422 CE), Xuanzang (602–664) and Yijing (635–713 CE) were traveling to India, and their writings do describe monasteries which they label 'Mahāyāna' as well as monasteries where both Mahāyāna monks and non-Mahāyāna monks lived together. After the fifth century, Mahāyāna Buddhism and its institutions slowly grew in influence. Some of the most influential institutions became massive monastic university complexes such as Nalanda (established by the 5th-century CE Gupta emperor Kumaragupta I) and Vikramashila (established under Dharmapala, c. 783 to 820), which were centers of various branches of scholarship, including Mahāyāna philosophy.
The Nalanda complex eventually became the largest and most influential Buddhist center in India for centuries. Even so, as noted by Paul Williams, "it seems that fewer than 50 percent of the monks encountered by Xuanzang (Hsüan-tsang; c. 600–664) on his visit to India actually were Mahāyānists." Over time, Indian Mahāyāna texts and philosophy reached Central Asia and China through trade routes like the Silk Road, later spreading throughout East Asia. Central Asian Buddhism became heavily influenced by Mahāyāna and was a major source for Chinese Buddhism. Mahāyāna works have also been found in Gandhāra, indicating the importance of this region for the spread of Mahāyāna. Central Asian Mahāyāna scholars were very important in the Silk Road transmission of Buddhism. They included translators like Lokakṣema (c. 167–186), Dharmarakṣa (c. 265–313), Kumārajīva (c. 401), and Dharmakṣema (385–433). The site of Dunhuang seems to have been a particularly important place for the study of Mahāyāna Buddhism. Mahāyāna spread from China to Korea, Vietnam, and Japan (the latter partly through Korea as well). Mahāyāna also spread from India to Myanmar, and then to Sumatra and Malaysia. From Sumatra, Mahāyāna spread to other Indonesian islands, including Java and Borneo, as well as to the Philippines and Cambodia, and eventually Indonesian Mahāyāna traditions made it to China. By the fourth century, Chinese monks like Faxian (c. 337–422 CE) had also begun to travel to India (then dominated by the Guptas) to bring back Buddhist teachings, especially Mahāyāna works. These figures also wrote about their experiences in India, and their work remains invaluable for understanding Indian Buddhism. In some cases Indian Mahāyāna traditions were directly transplanted, as with the case of East Asian Madhyamaka (by Kumārajīva) and East Asian Yogacara (especially by Xuanzang). Later, new developments in Chinese Mahāyāna led to new Chinese Buddhist traditions like Tiantai, Huayan, Pure Land and Chan Buddhism (Zen). These traditions would then spread to Korea, Vietnam and Japan. Forms of Mahāyāna Buddhism which are mainly based on the doctrines of Indian Mahāyāna sutras are still popular in East Asian Buddhism, which is mostly dominated by various branches of Mahāyāna Buddhism. Paul Williams has noted that in this tradition in the Far East, primacy has always been given to the study of the Mahāyāna sūtras. Beginning during the Gupta period (c. 3rd century–575 CE), a new movement began to develop which drew on previous Mahāyāna doctrine as well as new pan-Indian tantric ideas. This came to be known by various names such as Vajrayāna (Tibetan: rdo rje theg pa), Mantrayāna, and Esoteric Buddhism or "Secret Mantra" (Guhyamantra). This new movement continued into the Pala era (8th century–12th century CE), during which it grew to dominate Indian Buddhism. Possibly led by groups of wandering tantric yogis known as mahasiddhas, this movement developed new tantric spiritual practices and also promoted new texts called the Buddhist Tantras. Philosophically, Vajrayāna Buddhist thought remained grounded in the Mahāyāna Buddhist ideas of Madhyamaka, Yogacara and Buddha-nature. Tantric Buddhism generally deals with new forms of meditation and ritual which often make use of the visualization of Buddhist deities (including Buddhas, bodhisattvas, dakinis, and fierce deities) and the use of mantras. Most of these practices are esoteric and require ritual initiation or introduction by a tantric master (vajracarya) or guru.
The source and early origins of Vajrayāna remain a subject of debate among scholars. Some scholars, like Alexis Sanderson, argue that Vajrayāna derives its tantric content from Shaivism and that it developed as a result of royal courts sponsoring both Buddhism and Shaivism. Sanderson argues that Vajrayāna works like the Samvara and Guhyasamaja texts show direct borrowing from Shaiva tantric literature. However, other scholars such as Ronald M. Davidson question the idea that Indian tantrism developed in Shaivism first and was then adopted into Buddhism. Davidson points to the difficulties of establishing a chronology for the Shaiva tantric literature and argues that both traditions developed side by side, drawing on each other as well as on local Indian tribal religion. Whatever the case, this new tantric form of Mahāyāna Buddhism became extremely influential in India, especially in Kashmir and in the lands of the Pala Empire. It eventually also spread north into Central Asia, the Tibetan plateau and East Asia. Vajrayāna remains the dominant form of Buddhism in Tibet, in surrounding regions like Bhutan, and in Mongolia. Esoteric elements are also an important part of East Asian Buddhism, where they are referred to by various terms. These include: Zhēnyán (Chinese: 真言, literally "true word", referring to mantra), Mìjiao (Chinese: 密教; Esoteric Teaching), Mìzōng (密宗; "Esoteric Tradition") or Tángmì (唐密; "Tang (Dynasty) Esoterica") in Chinese, and Shingon, Tomitsu, Mikkyo, and Taimitsu in Japanese. Worldview Few things can be said with certainty about Mahāyāna Buddhism in general other than that the Buddhism practiced in China, Indonesia, Vietnam, Korea, Tibet, Mongolia and Japan is Mahāyāna Buddhism.[note 7] Mahāyāna can be described as a loosely bound collection of many teachings and practices (some of which are seemingly contradictory).[note 8] Mahāyāna constitutes an inclusive and broad set of traditions characterized by plurality and the adoption of a vast number of new sutras, ideas and philosophical treatises in addition to the earlier Buddhist texts. Broadly speaking, Mahāyāna Buddhists accept the classic Buddhist doctrines found in early Buddhism (i.e. the Nikāyas and Āgamas), such as the Middle Way, dependent origination, the Four Noble Truths, the Noble Eightfold Path, the Three Jewels, the three marks of existence and the bodhipakṣadharmas (aids to awakening). Mahāyāna Buddhism further accepts some of the ideas found in Buddhist Abhidharma thought. However, Mahāyāna also adds numerous Mahāyāna texts and doctrines, which are seen as definitive and in some cases superior teachings. D.T. Suzuki described the broad range and doctrinal liberality of Mahāyāna as "a vast ocean where all kinds of living beings are allowed to thrive in a most generous manner, almost verging on a chaos". Paul Williams identifies the main impulse behind Mahāyāna as the vision that the motivation to achieve Buddhahood for the sake of other beings is the supreme religious motivation. This is the way that Atisha defines Mahāyāna in his Bodhipathapradipa. As such, according to Williams, "Mahāyāna is not as such an institutional identity. Rather, it is inner motivation and vision, and this inner vision can be found in anyone regardless of their institutional position."
Thus, instead of a specific school or sect, Mahāyāna is a "family term" or a religious tendency, which is united by "a vision of the ultimate goal of attaining full Buddhahood for the benefit of all sentient beings (the 'bodhisattva ideal') and also (or eventually) a belief that Buddhas are still around and can be contacted (hence the possibility of an ongoing revelation)." Buddhas and bodhisattvas (beings on their way to Buddhahood) are central elements of Mahāyāna. Mahāyāna has a vastly expanded cosmology and theology, with various Buddhas and powerful bodhisattvas residing in different worlds and buddha-fields (buddha kshetra). Buddhas unique to Mahāyāna include the Buddhas Amitābha ("Infinite Light"), Akṣobhya ("the Imperturbable"), Bhaiṣajyaguru ("Medicine Guru") and Vairocana ("the Illuminator"). In Mahāyāna, a Buddha is seen as a being that has achieved the highest kind of awakening due to his superior compassion and wish to help all beings. An important feature of Mahāyāna is the way that it understands the nature of a Buddha, which differs from non-Mahāyāna understandings. Mahāyāna texts not only often depict numerous Buddhas besides Sakyamuni, but see them as transcendental or supramundane (lokottara) beings with great powers and huge lifetimes. The White Lotus Sutra famously describes the lifespan of the Buddha as immeasurable and states that he actually achieved Buddhahood countless eons (kalpas) ago and has been teaching the Dharma through his numerous avatars for an unimaginable period of time. Furthermore, Buddhas are active in the world, constantly devising ways to teach and help all sentient beings. According to Paul Williams, in Mahāyāna, a Buddha is often seen as "a spiritual king, relating to and caring for the world", rather than simply a teacher who after his death "has completely 'gone beyond' the world and its cares". Buddha Sakyamuni's life and death on earth are then usually understood docetically as a "mere appearance"; his death is a display, while in actuality he remains present out of compassion to help all sentient beings. Similarly, Guang Xing describes the Buddha in Mahāyāna as an omnipotent and almighty divinity "endowed with numerous supernatural attributes and qualities". Mahāyāna Buddhologies have often been compared to various types of theism (including pantheism) by different scholars, though there is disagreement among scholars regarding this issue, as well as on the general relationship between Buddhism and theism. The idea that Buddhas remain accessible is extremely influential in Mahāyāna and also allows for the possibility of a reciprocal relationship with a Buddha through prayer, visions, devotion and revelations. Through the use of various practices, a Mahāyāna devotee can aspire to be reborn in a Buddha's pure land or buddha field (buddhakṣetra), where they can strive towards Buddhahood in the best possible conditions. Depending on the sect, liberation into a buddha-field can be obtained by faith, meditation, or sometimes even by the repetition of a Buddha's name. Faith-based devotional practices focused on rebirth in pure lands are common in East Asian Pure Land Buddhism. The influential Mahāyāna concept of the three bodies (trikāya) of a Buddha developed to make sense of the transcendental nature of the Buddha. This doctrine holds that the "bodies of magical transformation" (nirmāṇakāyas) and the "enjoyment bodies" (saṃbhogakāya) are emanations from the ultimate Buddha body, the Dharmakaya, which is none other than the ultimate reality itself, i.e.
emptiness or Thusness. The Mahāyāna bodhisattva path (mārga) or vehicle (yāna) is seen by Mahāyānists as the superior spiritual path, over and above the paths of those who seek arhatship or "solitary buddhahood" for their own sake (Śrāvakayāna and Pratyekabuddhayāna). Mahāyāna Buddhists generally hold that pursuing only the personal release from suffering, i.e. nirvāṇa, is a smaller or inferior aspiration (called "hinayana"), because it lacks the wish and resolve to liberate all other sentient beings from saṃsāra (the round of rebirth) by becoming a Buddha. This wish to help others by entering the Mahāyāna path is called bodhicitta, and someone who engages in this path to complete buddhahood is a bodhisattva. High-level bodhisattvas (with eons of practice) are seen as extremely powerful supramundane beings. They are objects of devotion and prayer throughout the Mahāyāna world. Popular bodhisattvas revered across Mahāyāna include Avalokiteshvara, Manjushri, Tara and Maitreya. Bodhisattvas could reach the personal nirvana of the arhats, but they reject this goal and remain in saṃsāra to help others out of compassion. According to the eighth-century Mahāyāna philosopher Haribhadra, the term "bodhisattva" can technically refer to those who follow any of the three vehicles, since all are working towards bodhi (awakening); hence, the technical term for a Mahāyāna bodhisattva is a mahāsattva (great being) bodhisattva. According to Paul Williams, a Mahāyāna bodhisattva is best defined as: that being who has taken the vow to be reborn, no matter how many times this may be necessary, in order to attain the highest possible goal, that of Complete and Perfect Buddhahood. This is for the benefit of all sentient beings. There are two models for the nature of the bodhisattvas, which are seen in the various Mahāyāna texts. One is the idea that a bodhisattva must postpone their awakening until full Buddhahood is attained. This could take eons, and in the meantime they will help countless beings. After reaching Buddhahood, they do pass on to nirvāṇa (after which they do not return). The second model is the idea that there are two kinds of nirvāṇa, the nirvāṇa of an arhat and a superior type of nirvāṇa called apratiṣṭhita (non-abiding, not-established) that allows a Buddha to remain forever engaged in the world. As noted by Paul Williams, the idea of apratiṣṭhita nirvāṇa may have taken some time to develop and is not obvious in some of the early Mahāyāna literature. In most classic Mahāyāna sources (as well as in non-Mahāyāna sources on the topic), the bodhisattva path is said to take three or four asaṃkheyyas ("incalculable eons"), requiring a huge number of lifetimes of practice to complete. However, certain practices are sometimes held to provide shortcuts to Buddhahood (these vary widely by tradition). According to the Bodhipathapradīpa (A Lamp for the Path to Awakening) by the Indian master Atiśa, the central defining feature of a bodhisattva's path is the universal aspiration to end suffering for themselves and all other beings, i.e. bodhicitta. The bodhisattva's spiritual path is traditionally held to begin with the revolutionary event called the "arising of the Awakening Mind" (bodhicittotpāda), which is the wish to become a Buddha in order to help all beings. This is achieved in different ways, such as the meditation taught by the Indian master Shantideva in his Bodhicaryavatara, called "equalising self and others and exchanging self and others".
Other Indian masters like Atisha and Kamalashila also teach a meditation in which one contemplates how all beings have been one's close relatives or friends in past lives. This contemplation leads to the arising of deep love (maitrī) and compassion (karuṇā) for others, and thus bodhicitta is generated. According to the Indian philosopher Shantideva, when great compassion and bodhicitta arise in a person's heart, they cease to be an ordinary person and become a "son or daughter of the Buddhas". The idea of the bodhisattva is not unique to Mahāyāna Buddhism; it is also found in Theravada and other early Buddhist schools. However, these schools held that becoming a bodhisattva required a prediction of one's future Buddhahood in the presence of a living Buddha. In Mahāyāna, the term bodhisattva is applicable to any person from the moment they intend to become a Buddha (i.e. the moment in which bodhicitta arises in their mind), without the requirement of a living Buddha being present. Some Mahāyāna sūtras like the Lotus Sutra promote the bodhisattva path as being universal and open to everyone. Other texts disagree with this and state that only some beings have the capacity for Buddhahood. The generation of bodhicitta may then be followed by the taking of the bodhisattva vows (praṇidhāna) to "lead to Nirvana the whole immeasurable world of beings", as the Prajñāpāramitā sutras state. This compassionate commitment to help others is the central characteristic of the Mahāyāna bodhisattva. These vows may be accompanied by certain ethical guidelines called the bodhisattva precepts. Numerous sutras also state that a key part of the bodhisattva path is the practice of a set of virtues called pāramitās (transcendent or supreme virtues). Sometimes six are outlined: giving, ethical discipline, patient endurance, diligence, meditation and transcendent wisdom. Other sutras (like the Daśabhūmika) give a list of ten, with the addition of upāya (skillful means), praṇidhāna (vow, resolution), bala (spiritual power) and jñāna (knowledge). Prajñā (transcendent knowledge or wisdom) is arguably the most important virtue of the bodhisattva. This refers to an understanding of the emptiness of all phenomena, arising from study, deep consideration and meditation. Various Mahāyāna Buddhist scriptures associate the beginning of bodhisattva practice with what is called the "path of accumulation" or equipment (saṃbhāra-mārga), which is the first path of the classic five-path schema. The Daśabhūmika Sūtra, as well as other texts, also outlines a series of bodhisattva levels or spiritual stages (bhūmis) on the path to Buddhahood. The various texts disagree on the number of stages, however: the Daśabhūmika, for example, gives ten (mapping each one to the ten pāramitās), the Bodhisattvabhūmi gives seven and thirteen, and the Avatamsaka outlines 40 stages. In later Mahāyāna scholasticism, such as in the work of Kamalashila and Atiśa, the five paths and ten bhūmi systems are merged, and this is the progressive path model used in Tibetan Buddhism. According to Paul Williams, in these systems the first bhūmi is reached once one attains "direct, nonconceptual and nondual insight into emptiness in meditative absorption", which is associated with the path of seeing (darśana-mārga). At this point, a bodhisattva is considered an ārya (a noble being). Skillful means or expedient technique (Skt. upāya) is another important virtue and doctrine in Mahāyāna Buddhism.
The idea is most famously expounded in the White Lotus Sutra, and refers to any effective method or technique that is conducive to spiritual growth and leads beings to awakening and nirvana. This doctrine states that, out of compassion, the Buddha adapts his teaching to whomever he is teaching. Because of this, it is possible that the Buddha may teach seemingly contradictory things to different people. This idea is also used to explain the vast textual corpus found in Mahāyāna. A closely related teaching is the doctrine of the One Vehicle (ekayāna). This teaching states that even though the Buddha is said to have taught three vehicles (the disciples' vehicle, the vehicle of solitary Buddhas and the bodhisattva vehicle, which are accepted by all early Buddhist schools), these are actually all skillful means which lead to the same place: Buddhahood. Therefore, there really are not three vehicles in an ultimate sense, but one vehicle, the supreme vehicle of the Buddhas, which is taught in different ways depending on the faculties of individuals. Even those beings who think they have finished the path (i.e. the arhats) are actually not done, and they will eventually reach Buddhahood. This doctrine was not accepted in full by all Mahāyāna traditions. The Yogācāra school famously defended an alternative theory which held that not all beings could become Buddhas. This became a subject of much debate throughout Mahāyāna Buddhist history. Some of the key Mahāyāna teachings are found in the Prajñāpāramitā ("Transcendent Knowledge" or "Perfection of Wisdom") texts, which are some of the earliest Mahāyāna works. Prajñāpāramitā is a deep knowledge of reality which Buddhas and bodhisattvas attain. It is a transcendent, non-conceptual and non-dual insight into the true nature of things. This wisdom is also associated with insight into the emptiness (śūnyatā) of dharmas (phenomena) and their illusory nature (māyā). This amounts to the idea that all phenomena (dharmas) without exception have "no essential unchanging core" (i.e. they lack svabhāva, an essence or inherent nature), and therefore have "no fundamentally real existence". These empty phenomena are also said to be conceptual constructions. Because of this, all dharmas (things, phenomena), even the Buddha's Teaching, the Buddha himself, Nirvāṇa and all living beings, are like "illusions" or "magic" (māyā) and "dreams" (svapna). This emptiness or lack of real existence applies even to the apparent arising and ceasing of phenomena. Because of this, all phenomena are also described in the Prajñāpāramitā literature as unarisen (anutpāda), unborn (ajāta), and "beyond coming and going". Most famously, the Heart Sutra states that "all phenomena are empty, that is, without characteristic, unproduced, unceased, stainless, not stainless, undiminished, unfilled". The Prajñāpāramitā texts also use various metaphors to describe the nature of things; for example, the Diamond Sutra compares phenomena to: "A shooting star, a clouding of the sight, a lamp, an illusion, a drop of dew, a bubble, a dream, a lightning's flash, a thunder cloud." Prajñāpāramitā is also associated with not grasping, not taking a stand on, or "not taking up" (aparigṛhīta) anything in the world. The Aṣṭasāhasrikā Prajñāpāramitā Sūtra explains it as "not grasping at form, not grasping at sensation, perception, volitions and cognition".
This includes not grasping or taking up even correct Buddhist ideas or mental signs (such as "not-self", "emptiness", bodhicitta, vows), since these things are ultimately all empty concepts as well. Attaining a state of fearless receptivity (kṣānti) through insight into the true nature of reality (Dharmatā) in an intuitive, non-conceptual manner is said to be the prajñāpāramitā, the highest spiritual wisdom. According to Edward Conze, the "patient acceptance of the non-arising of dharmas" (anutpattika-dharmakshanti) is "one of the most distinctive virtues of the Mahāyānistic saint." The Prajñāpāramitā texts also claim that this training is not just for Mahāyānists, but for all Buddhists following any of the three vehicles. The Mahāyāna philosophical school termed Madhyamaka (Middle theory or Centrism, also known as śūnyavāda, "the emptiness theory") was founded by the second-century figure of Nagarjuna. This philosophical tradition focuses on refuting all theories which posit any kind of substance, inherent existence or intrinsic nature (svabhāva). In his writings, Nagarjuna attempts to show that any theory of intrinsic nature is contradicted by the Buddha's theory of dependent origination, since anything that has an independent existence cannot be dependently originated. The śūnyavāda philosophers were adamant that their denial of svabhāva is not a kind of nihilism, despite their opponents' protestations that it was. Using the two truths theory, Madhyamaka claims that while one can speak of things existing in a conventional, relative sense, they do not exist inherently in an ultimate sense. Madhyamaka also argues that emptiness itself is "empty": it does not have an absolute inherent existence of its own. It is also not to be understood as a transcendental absolute reality. Instead, the emptiness theory is merely a useful concept that should not be clung to. In fact, for Madhyamaka, since everything is empty of true existence, all things are just conceptualizations (prajñapti-matra), including the theory of emptiness, and all concepts must ultimately be abandoned in order to truly understand the nature of things. Vijñānavāda ("the doctrine of consciousness", also known as vijñapti-mātra, "perceptions only", and citta-mātra, "mind only") is another important doctrine promoted by some Mahāyāna sutras; it later became the central theory of Yogācāra, a major philosophical movement which arose during the Gupta period. The primary sutra associated with this school of thought is the Saṃdhinirmocana Sūtra, which claims that śūnyavāda is not the final definitive teaching (nītārtha) of the Buddha. Instead, the ultimate truth (paramārtha-satya) is said to be the view that all things (dharmas) are only mind (citta), consciousness (vijñāna) or perceptions (vijñapti), and that seemingly "external" objects (or "internal" subjects) do not really exist apart from the dependently originated flow of mental experiences. When this flow of mentality is seen as being empty of the subject-object duality imposed upon it, one reaches the non-dual cognition of "Thusness" (tathatā), which is nirvana. This doctrine is developed through various theories, the most important being the eight consciousnesses and the three natures. The Saṃdhinirmocana calls its doctrine the 'third turning of the dharma wheel'. The Pratyutpanna sutra also mentions this doctrine, stating: "whatever belongs to this triple world is nothing but thought [citta-mātra]. Why is that?
It is because however I imagine things, that is how they appear". The most influential thinkers in this tradition were the Indian brothers Asanga and Vasubandhu. Yogācāra philosophers developed their own interpretation of the doctrine of emptiness, which also criticized Madhyamaka, in effect claiming that it fell into nihilism. The doctrine of the Tathāgata embryo or Tathāgata womb (Tathāgatagarbha), also known as Buddha-nature, Buddha-matrix or Buddha-principle (Skt: Buddha-dhātu), is important in all modern Mahāyāna traditions, though it is interpreted in many different ways. Broadly speaking, Buddha-nature is concerned with explaining what allows sentient beings to become Buddhas. The earliest sources for this idea may include the Tathāgatagarbha Sūtra and the Mahāyāna Mahāparinirvāṇa Sūtra. The Mahāyāna Mahāparinirvāṇa Sūtra refers to "a sacred nature that is the basis for [beings] becoming buddhas", and it also describes it as the 'Self' (atman). David Seyfort Ruegg explains this concept as the base or support for the practice of the path, and thus it is the "cause" (hetu) for the fruit of Buddhahood. The Tathāgatagarbha Sūtra states that within the defilements is found "the tathagata's wisdom, the tathagata's vision, and the tathagata's body...eternally unsullied, and...replete with virtues no different from my own...the tathagatagarbhas of all beings are eternal and unchanging". The ideas found in the Buddha-nature literature are a source of much debate and disagreement among Mahāyāna Buddhist philosophers as well as modern academics. Some scholars have seen this as an influence from Brahmanic Hinduism, and some of these sutras admit that the use of the term 'Self' is partly done in order to win over non-Buddhist ascetics (in other words, it is a skillful means). According to some scholars, the Buddha-nature discussed in some Mahāyāna sūtras does not represent a substantial self (ātman) which the Buddha critiqued; rather, it is a positive expression of emptiness (śūnyatā) and represents the potentiality to realize Buddhahood through Buddhist practices. Similarly, Williams thinks that this doctrine was not originally dealing with ontological issues, but with "religious issues of realising one's spiritual potential, exhortation, and encouragement." The Buddha-nature genre of sūtras can be seen as an attempt to state Buddhist teachings using positive language while also maintaining the middle way, to prevent people from being turned away from Buddhism by a false impression of nihilism. This is the position taken by the Laṅkāvatāra Sūtra, which states that the Buddhas teach the doctrine of tathāgatagarbha (which sounds similar to an atman) in order to help those beings who are attached to the idea of an atman. However, the sutra goes on to say that the tathāgatagarbha is empty and is not actually a substantial self. A different view is defended by various modern scholars like Michael Zimmermann: that Buddha-nature sutras such as the Mahāparinirvāṇa and the Tathāgatagarbha Sūtra teach an affirmative vision of an eternal, indestructible Buddhic Self. Shenpen Hookham, a Western scholar and lama, sees Buddha-nature as a True Self that is real and permanent. Similarly, C. D. Sebastian understands the Ratnagotravibhāga's view of this topic as a transcendental self that is "the unique essence of the universe". Indian Mahāyāna Buddhists faced various criticisms from non-Mahāyānists regarding the authenticity of their teachings.
The main critique they faced was that Mahāyāna teachings had not been taught by the Buddha, but were invented by later figures. Numerous Mahāyāna texts discuss this issue and attempt to defend the truth and authenticity of Mahāyāna in various ways. One idea that Mahāyāna texts put forth is that the Mahāyāna teachings were taught later because most people at the time of the Buddha were unable to understand the Mahāyāna sūtras, and people were ready to hear the Mahāyāna only in later times. Certain traditional accounts state that Mahāyāna sutras were hidden away or kept safe by divine beings like Nagas or bodhisattvas until the time came for their dissemination. Similarly, some sources also state that Mahāyāna teachings were revealed by other Buddhas, bodhisattvas and devas to a select number of individuals (often through visions or dreams). Some scholars have seen a connection between this idea and Mahāyāna meditation practices which involve the visualization of Buddhas and their Buddha-lands. Another argument that Indian Buddhists used in favor of the Mahāyāna is that its teachings are true and lead to awakening since they are in line with the Dharma. Because of this, they can be said to be "well said" (subhasita), and therefore they can be said to be the word of the Buddha in this sense. This idea that whatever is "well spoken" is the Buddha's word can be traced to the earliest Buddhist texts, but it is interpreted more widely in Mahāyāna. From the Mahāyāna point of view, a teaching is the "word of the Buddha" because it is in accord with the Dharma, not because it was spoken by a specific individual (i.e. Gautama). This idea can be seen in the writings of Shantideva (8th century), who argues that an "inspired utterance" is the Buddha's word if it is "connected with the truth", "connected with the Dharma", "brings about renunciation of kleshas, not their increase" and "it shows the laudable qualities of nirvana, not those of samsara". The modern Japanese Zen Buddhist scholar D. T. Suzuki similarly argued that while the Mahāyāna sūtras may not have been directly taught by the historical Buddha, the "spirit and central ideas" of Mahāyāna derive from the Buddha. According to Suzuki, Mahāyāna evolved and adapted itself to suit the times by developing new teachings and texts, while maintaining the spirit of the Buddha. Mahāyāna often sees itself as penetrating further and more profoundly into the Buddha's Dharma. An Indian commentary on the Mahāyānasaṃgraha gives a classification of teachings according to the capabilities of the audience: According to disciples' grades, the Dharma is classified as inferior and superior. For example, the inferior was taught to the merchants Trapuṣa and Ballika because they were ordinary men; the middle was taught to the group of five because they were at the stage of saints; the eightfold Prajñāpāramitās were taught to bodhisattvas, and [the Prajñāpāramitās] are superior in eliminating conceptually imagined forms. - Vivṛtaguhyārthapiṇḍavyākhyā There is also a tendency in Mahāyāna sūtras to regard adherence to these sūtras as generating spiritual benefits greater than those that arise from being a follower of the non-Mahāyāna approaches. Thus the Śrīmālādevī Siṃhanāda Sūtra claims that the Buddha said that devotion to Mahāyāna is inherently superior in its virtues to following the śrāvaka or pratyekabuddha paths.
The commentary on the Abhidharmasamuccaya gives seven reasons for the "greatness" of the Mahayana. Practice Mahāyāna Buddhist practice is quite varied. A common set of virtues and practices shared by all Mahāyāna traditions is that of the six perfections or transcendent virtues (pāramitā). Another central practice advocated by numerous Mahāyāna sources focuses on "the acquisition of merit, the universal currency of the Buddhist world, a vast quantity of which was believed to be necessary for the attainment of Buddhahood". Indian Mahayana Buddhist practice included numerous elements of devotion and ritual, which were considered to generate much merit (punya) and to allow the devotee to obtain the power or spiritual blessings of the Buddhas and bodhisattvas. These elements remain a key part of Mahayana Buddhism today, and several key Mahayana practices developed in this devotional and ritual vein. Mahāyāna sūtras, especially those of the Prajñāpāramitā genre, teach the practice of the six transcendent virtues or perfections (pāramitā) as part of the path to Buddhahood. Special attention is given to transcendent knowledge (prajñāpāramitā), which is seen as a primary virtue. According to Donald S. Lopez Jr., the term pāramitā can mean "excellence" or "perfection" as well as "that which has gone beyond" or "transcendence". The Prajñāpāramitā sūtras and a large number of other Mahāyāna texts list six perfections: giving, ethical discipline, patient endurance, diligence, meditation and transcendent wisdom. This list is also mentioned by the Theravāda commentator Dhammapala, who describes it as a categorization of the same ten perfections of Theravada Buddhism. According to Dhammapala, Sacca is classified as both Śīla and Prajñā, Mettā and Upekkhā are classified as Dhyāna, and Adhiṭṭhāna falls under all six. Bhikkhu Bodhi states that the correlations between the two sets show there was a shared core before the Theravada and Mahayana schools split. In the Ten Stages Sutra (Daśabhūmika) and the Mahāratnakūṭa Sūtra, four more pāramitās are listed: skillful means (upāya), vow (praṇidhāna), spiritual power (bala) and knowledge (jñāna). Mahāyāna Buddhism teaches a vast array of meditation practices. These include meditations which are shared with the early Buddhist traditions, including mindfulness of breathing; mindfulness of the unattractiveness of the body; loving-kindness; the contemplation of dependent origination; and mindfulness of the Buddha. In Chinese Buddhism, these five practices are known as the "five methods for stilling or pacifying the mind" and support the development of the stages of dhyana. The Yogācārabhūmi-Śāstra (compiled c. 4th century), which is the most comprehensive Indian treatise on Mahāyāna practice, discusses numerous classic Buddhist meditation methods and topics, including the four dhyānas, the different kinds of samādhi, the development of insight (vipaśyanā) and tranquility (śamatha), the four foundations of mindfulness (smṛtyupasthāna), the five hindrances (nivaraṇa), and classic Buddhist meditations such as the contemplation of unattractiveness, impermanence (anitya), suffering (duḥkha), and the contemplation of death (maraṇasaṃjñā). Other works of the Yogācāra school, such as Asaṅga's Abhidharmasamuccaya and Vasubandhu's Madhyāntavibhāga-bhāsya, also discuss meditation topics such as mindfulness (smṛtyupasthāna), the 37 wings to awakening, and samādhi. A very popular Mahāyāna practice from very early times involved the visualization of a Buddha and their Pure Land while practicing mindfulness of that Buddha (buddhānusmṛti).
This practice could lead the meditator to feel that they were in the presence of the Buddha, and in some cases it was held that it could lead to visions of the Buddhas, through which one could receive teachings from them. This meditation is taught in numerous Mahāyāna sūtras such as the Pure Land sutras, the Akṣobhya-vyūha and the Pratyutpanna Samādhi. The Pratyutpanna states that through the meditation of mindfulness of the Buddha, one may be able to meet this Buddha in a vision or a dream and learn from them. Similarly, the Samādhirāja Sūtra states: Those who, while walking, sitting, standing, or sleeping, recollect the moon-like Buddha, will always be in Buddha's presence and will attain the vast nirvāṇa. His pure body is the colour of gold, beautiful is the Protector of the World. Whoever visualizes him like this practises the meditation of the bodhisattvas. In the case of Pure Land Buddhism, it is widely held that the practice of reciting the Buddha's name (called nianfo in Chinese and nembutsu in Japanese) can lead to rebirth in a Buddha's Pure Land, as well as other positive outcomes. In East Asian Buddhism, the most popular Buddha used for this practice is Amitabha. East Asian Mahāyāna Buddhism also teaches numerous unique meditation methods, including the Chan (Zen) practices of huatou, koan meditation, and silent illumination (Chinese: mòzhào, which developed into the Japanese shikantaza method). Indo-Tibetan Buddhism also includes numerous unique forms of Mahāyāna contemplation, such as tonglen ("sending and receiving"), lojong ("mind training") and śamatha-vipaśyanā. There are also numerous meditative practices that are generally considered to be part of a separate category rather than general or mainstream Mahāyāna meditation. These are the various practices associated with Vajrayāna (also termed Mantrayāna, Secret Mantra, Buddhist Tantra, and Esoteric Buddhism). This family of practices, which includes such varied forms as Deity Yoga, Dzogchen, Mahamudra, the Six Dharmas of Nāropa, the recitation of mantras and dharanis, and the use of mudras and mandalas, is very important in Tibetan Buddhism as well as in some forms of East Asian Mantrayāna like Chinese Esoteric Buddhism, Shingon, and Tendai. Scripture Mahāyāna Buddhism takes the basic teachings of the Buddha as recorded in early scriptures, such as those concerning karma and rebirth, anātman, emptiness, dependent origination, and the Four Noble Truths, as the starting point of its own teachings. Mahāyāna Buddhists in East Asia have traditionally studied these teachings in the Āgamas preserved in the Chinese Buddhist canon. "Āgama" is the term used by those traditional Buddhist schools in India that employed Sanskrit for their basic canon. These correspond to the Nikāyas used by the Theravāda school. The surviving Āgamas in Chinese translation belong to at least two schools. Most of the Āgamas were never translated into the Tibetan canon, which, according to Hirakawa, only contains a few translations of early sutras corresponding to the Nikāyas or Āgamas. However, these basic doctrines are contained in Tibetan translations of later works such as the Abhidharmakośa and the Yogācārabhūmi-Śāstra. In addition to accepting the essential scriptures of the early Buddhist schools as valid, Mahāyāna Buddhism maintains large collections of sūtras that are not recognized as authentic by the modern Theravāda school.
The earliest of these sutras do not call themselves 'Mahāyāna', but use the terms vaipulya (extensive) sutras or gambhira (profound) sutras. These were also not recognized by some individuals in the early Buddhist schools. In other cases, Buddhist communities such as the Mahāsāṃghika school were divided along these doctrinal lines. In Mahāyāna Buddhism, the Mahāyāna sūtras are often given greater authority than the Āgamas. The first of these Mahāyāna-specific writings were probably composed around the 1st century BCE or 1st century CE. Some influential Mahāyāna sutras are the Prajñāpāramitā sutras such as the Aṣṭasāhasrikā Prajñāpāramitā Sūtra, the Lotus Sutra, the Pure Land sutras, the Vimalakirti Sutra, the Golden Light Sutra, the Avatamsaka Sutra, the Saṃdhinirmocana Sūtra and the Tathāgatagarbha sūtras. According to David Drewes, Mahāyāna sutras contain several elements besides the promotion of the bodhisattva ideal, including "expanded cosmologies and mythical histories, ideas of purelands and great, 'celestial' Buddhas and bodhisattvas, descriptions of powerful new religious practices, new ideas on the nature of the Buddha, and a range of new philosophical perspectives." These texts present stories of revelation in which the Buddha teaches Mahāyāna sutras to certain bodhisattvas who vow to teach and spread these sutras after the Buddha's death. Regarding religious praxis, David Drewes notes that the practices most commonly promoted in Mahāyāna sutras were seen as means to achieve Buddhahood quickly and easily; they included "hearing the names of certain Buddhas or bodhisattvas, maintaining Buddhist precepts, and listening to, memorizing, and copying sutras, that they claim can enable rebirth in the pure lands Abhirati and Sukhavati, where it is said to be possible to easily acquire the merit and knowledge necessary to become a Buddha in as little as one lifetime." Another widely recommended practice is anumodana, or rejoicing in the good deeds of Buddhas and Bodhisattvas. The practice of meditation and visualization of Buddhas has been seen by some scholars as a possible explanation for the source of certain Mahāyāna sutras, which are traditionally seen as direct visionary revelations from the Buddhas in their pure lands. Paul Harrison has also noted the importance of dream revelations in certain Mahāyāna sutras such as the Arya-svapna-nirdesa, which lists and interprets 108 dream signs. As noted by Paul Williams, one feature of Mahāyāna sutras (especially earlier ones) is "the phenomenon of laudatory self-reference – the lengthy praise of the sutra itself, the immense merits to be obtained from treating even a verse of it with reverence, and the nasty penalties which will accrue in accordance with karma to those who denigrate the scripture." Some Mahāyāna sutras also warn against the accusation that they are not the word of the Buddha (buddhavacana), such as the Aṣṭasāhasrikā (8,000-verse) Prajñāpāramitā, which states that such claims come from Mara (the evil tempter). Some of these Mahāyāna sutras also warn those who would denigrate Mahāyāna sutras or those who preach them (i.e. the dharmabhanakas) that this action can lead to rebirth in hell. Another feature of some Mahāyāna sutras, especially later ones, is increasing sectarianism and animosity towards non-Mahāyāna practitioners (sometimes called śrāvakas, "hearers"), who are sometimes depicted as followers of the 'hīnayāna' (the 'inferior way') who refuse to accept the 'superior way' of the Mahāyāna.
As noted by Paul Williams, earlier Mahāyāna sutras like the Ugraparipṛcchā Sūtra and the Ajitasena sutra do not present any antagonism towards the hearers or the ideal of arhatship, as later sutras do. Regarding the bodhisattva path, some Mahāyāna sutras promote it as a universal path for everyone, while others like the Ugraparipṛcchā see it as something for a small elite of hardcore ascetics. In the 4th-century Mahāyāna Abhidharma work Abhidharmasamuccaya, Asaṅga refers to the collection which contains the āgamas as the Śrāvakapiṭaka and associates it with the śrāvakas and pratyekabuddhas. Asaṅga classifies the Mahāyāna sūtras as belonging to the Bodhisattvapiṭaka, which is designated as the collection of teachings for bodhisattvas. Mahāyāna Buddhism also developed a massive commentarial and exegetical literature, many of which are called śāstras (treatises) or vṛttis (commentaries). Philosophical texts were also written in verse form (kārikās), such as in the case of the famous Mūlamadhyamakakārikā (Root Verses on the Middle Way) by Nagarjuna, the foundational text of Madhyamaka philosophy. Numerous later Mādhyamika philosophers like Candrakirti wrote commentaries on this work as well as their own verse works. The Mahāyāna Buddhist tradition also relies on numerous non-Mahayana commentaries (śāstras), a very influential one being the Abhidharmakosha of Vasubandhu, which is written from a non-Mahayana Sarvastivada–Sautrantika perspective. Vasubandhu is also the author of various Mahāyāna Yogacara texts on the philosophical theory known as vijñapti-matra (conscious construction only). The Yogacara school philosopher Asanga is also credited with numerous highly influential commentaries. In East Asia, the Satyasiddhi śāstra was also influential. Another influential tradition is that of Dignāga's school of Buddhist logic, whose work focused on epistemology. He produced the Pramāṇasamuccaya, and later Dharmakirti wrote the Pramāṇavārttika, a commentary on and reworking of the Dignaga text. Later Tibetan and Chinese Buddhists continued the tradition of writing commentaries. Dating back at least to the Saṃdhinirmocana Sūtra is a classification of the corpus of Buddhism into three categories, based on ways of understanding the nature of reality, known as the "Three Turnings of the Dharma Wheel". According to this view, there were three such "turnings". Some traditions of Tibetan Buddhism consider the teachings of Esoteric Buddhism and Vajrayāna to be the third turning of the Dharma Wheel. Tibetan teachers, particularly of the Gelugpa school, regard the second turning as the highest teaching, because of their particular interpretation of Yogācāra doctrine. The Buddha-nature teachings are normally included in the third turning of the wheel. The different Chinese Buddhist traditions have different schemes of doctrinal periodization, called panjiao, which they use to organize the sometimes bewildering array of texts. Scholars have noted that many key Mahāyāna ideas are closely connected to the earliest texts of Buddhism. The seminal work of Mahāyāna philosophy, Nāgārjuna's Mūlamadhyamakakārikā, mentions the canon's Katyāyana Sūtra (SA 301) by name, and may be an extended commentary on that work. Nāgārjuna systematized the Mādhyamaka school of Mahāyāna philosophy. He may have arrived at his positions from a desire to achieve a consistent exegesis of the Buddha's doctrine as recorded in the canon.
In his eyes, the Buddha was not merely a forerunner, but the very founder of the Mādhyamaka system. Nāgārjuna also referred to a passage in the canon regarding "nirvanic consciousness" in two different works. Yogācāra, the other prominent Mahāyāna school in dialectic with the Mādhyamaka school, gave a special significance to the canon's Lesser Discourse on Emptiness (MA 190). A passage there (which the discourse itself emphasizes) is often quoted in later Yogācāra texts as a true definition of emptiness. According to Walpola Rahula, the thought presented in the Yogācāra school's Abhidharma-samuccaya is undeniably closer to that of the Pali Nikayas than is that of the Theravadin Abhidhamma. Both the Mādhyamikas and the Yogācārins saw themselves as preserving the Buddhist Middle Way between the extremes of nihilism (everything as unreal) and substantialism (substantial entities existing). The Yogācārins criticized the Mādhyamikas for tending towards nihilism, while the Mādhyamikas criticized the Yogācārins for tending towards substantialism. Key Mahāyāna texts introducing the concepts of bodhicitta and Buddha nature also use language parallel to passages in the canon containing the Buddha's description of "luminous mind" and appear to have evolved from this idea. Contemporary forms of Mahāyāna Buddhism The main contemporary traditions of Mahāyāna in Asia are Chinese Buddhism, Korean Buddhism, Japanese Buddhism, Vietnamese Buddhism and Indo-Tibetan ("Northern") Buddhism, each of which is surveyed below. There are also some minor Mahāyāna traditions practiced by minority groups, such as Newar Buddhism practiced by the Newar people (Nepal) and Azhaliism practiced by the Bai people (Yunnan). Furthermore, there are also various new religious movements which either see themselves as Mahāyāna or are strongly influenced by Mahāyāna Buddhism. Examples of these include Hòa Hảo, Won Buddhism, Triratna Buddhist Community and Sōka Gakkai. Lastly, there are various East Asian religious traditions which are strongly influenced by Mahāyāna Buddhism, though they may not be considered as being "Buddhist" per se. These include Bon, Shugendo, Mongolian Yellow shamanism, Syncretized Shinto (shinbutsu-shūgō) and some of the Chinese salvationist religions. Most of the major forms of contemporary Mahāyāna Buddhism are also practiced by Asian immigrant populations in the West and also by western convert Buddhists. For more on this topic, see Buddhism in the West. Contemporary Han Chinese Buddhism is practiced through many varied forms, such as Chan (Zen), Pure Land, Tiantai, Huayan and mantra practices. This group is the largest population of Buddhists in the world. There are between 228 and 239 million Mahāyāna Buddhists in the People's Republic of China. This does not include the Tibetan and Mongolian Buddhists who practice Tibetan Buddhism. Harvey gives the East Asian Mahāyāna Buddhist population in other countries as follows: Taiwanese Buddhists, 8 million; Malaysian Buddhists, 5.5 million; Singaporean Buddhists, 1.5 million; Hong Kong, 0.7 million; Indonesian Buddhists, 4 million; Philippine Buddhists, 2.3 million. Most of these are Han Chinese populations. Chinese Buddhism can be divided into various different traditions (zong), such as Sanlun, Faxiang, Tiantai, Huayan, Pure Land, Chan, and Zhenyan. However, historically, most temples, institutions and Buddhist practitioners did not belong to any single "sect" (as is common in Japanese Buddhism), but drew on the various elements of Chinese Buddhist thought and practice. 
This non-sectarian and eclectic aspect of Chinese Buddhism as a whole has persisted from its historical beginnings into its modern practice. The modern development of an ideology called Humanistic Buddhism (Chinese: 人間佛教; pinyin: rénjiān fójiào, more literally "Buddhism for the Human World") has also been influential on Chinese Buddhist leaders and institutions. Chinese Buddhists may also practice some form of religious syncretism with other Chinese religions, such as Taoism. In modern China, the reform and opening up period in the late 20th century saw a particularly significant increase in the number of converts to Chinese Buddhism, a growth which has been called "extraordinary". Korean Buddhism is dominated by the Korean Seon school (i.e. Zen), primarily represented by the Jogye Order and the Taego Order. Korean Seon also includes some Pure Land practice. It is mainly practiced in South Korea, with roughly 10.9 million Buddhists. There are also some minor Korean schools, such as the Cheontae (i.e. Korean Tiantai), and the esoteric Jingak and Chinŏn schools. While North Korea's totalitarian government remains repressive and ambivalent towards religion, at least 11 percent of the population is considered to be Buddhist according to Williams. Japanese Buddhism is divided into numerous traditions which include various sects of Pure Land Buddhism (the largest being Shin and Jodo), Tendai, Nichiren Buddhism, Shingon and three major sects of Zen (Soto, Rinzai and Obaku). There are also various Mahāyāna-oriented Japanese new religions that arose in the post-war period. Many of these new religions are lay movements like Sōka Gakkai, Risshō Kōsei Kai and Agon Shū. Harvey estimates the Japanese Mahāyāna Buddhist population at 52 million, while a 2018 survey puts the number at 84 million. Many Japanese Buddhists also participate in Shinto practices, such as visiting shrines, collecting amulets and attending festivals. Vietnamese Buddhism is strongly influenced by the Chinese tradition. It is a synthesis of numerous practices and ideas. Vietnamese Mahāyāna draws practices from Vietnamese Thiền (Chan/Zen), Tịnh độ (Pure Land), and Mật Tông (Mantrayana) and its philosophy from Hoa Nghiêm (Huayan) and Thiên Thai (Tiantai). New Mahāyāna movements have also developed in the modern era, perhaps the most influential of which has been Thích Nhất Hạnh's Plum Village Tradition, which also draws from Theravada Buddhism. Though Vietnamese Buddhism suffered extensively during the Vietnam War (1955–1975) and during the subsequent communist takeover of the south, there has been a revival of the religion since the liberalization period following 1986. There are about 43 million Vietnamese Mahāyāna Buddhists. Indo-Tibetan Buddhism, Tibetan Buddhism or "Northern" Buddhism derives from the Indian Vajrayana Buddhism that was adopted in medieval Tibet. Though it includes numerous tantric Buddhist practices not found in East Asian Mahāyāna, Northern Buddhism still considers itself as part of Mahāyāna Buddhism (albeit as one which also contains a more effective and distinct vehicle or yana). Contemporary Northern Buddhism is traditionally practiced mainly in the Himalayan regions and in some regions of North Central Asia. As with Eastern Buddhism, the practice of Northern Buddhism declined in Tibet, China and Mongolia during the communist takeover of these regions (Mongolia: 1924, Tibet: 1959). 
Tibetan Buddhism continued to be practiced among the Tibetan diaspora population, as well as by other Himalayan peoples in Bhutan, Ladakh and Nepal. Since the 1980s, however, Northern Buddhism has seen a revival in both Tibet and Mongolia owing to more liberal government policies on religious freedom. Northern Buddhism is also now practiced in the Western world by western convert Buddhists. Relationship to the Theravada school In the early Buddhist texts, and as taught by the modern Theravada school, the goal of becoming a teaching Buddha in a future life is viewed as the aim of a small group of individuals striving to benefit future generations after the current Buddha's teachings have been lost, but in the current age there is no need for most practitioners to aspire to this goal. Theravada texts do, however, hold that this is a more perfectly virtuous goal. Paul Williams writes that some modern Theravada meditation masters in Thailand are popularly regarded as bodhisattvas. Cholvijarn observes that prominent figures associated with the Self perspective in Thailand have often been famous outside scholarly circles as well, among the wider populace, as Buddhist meditation masters and sources of miracles and sacred amulets. Like perhaps some of the early Mahāyāna forest hermit monks, or the later Buddhist Tantrics, they have become people of power through their meditative achievements. They are widely revered, worshipped, and held to be arhats or bodhisattvas. In the 7th century, the Chinese Buddhist monk Xuanzang describes the concurrent existence of the Mahāvihara and the Abhayagiri Vihara in Sri Lanka. He refers to the monks of the Mahāvihara as the "Hīnayāna Sthaviras" (Theras), and the monks of the Abhayagiri Vihara as the "Mahāyāna Sthaviras". Xuanzang further writes: The Mahāvihāravāsins reject the Mahāyāna and practice the Hīnayāna, while the Abhayagirivihāravāsins study both Hīnayāna and Mahāyāna teachings and propagate the Tripiṭaka. The modern Theravāda school is usually described as belonging to the Hīnayāna. Some authors have argued that it should not be considered such from the Mahāyāna perspective. Their view is based on a different understanding of the concept of Hīnayāna. Rather than regarding the term as referring to any school of Buddhism that has not accepted the Mahāyāna canon and doctrines, such as those pertaining to the role of the bodhisattva, these authors argue that the classification of a school as "Hīnayāna" should be crucially dependent on the adherence to a specific phenomenological position. They point out that unlike the now-extinct Sarvāstivāda school, which was the primary object of Mahāyāna criticism, the Theravāda does not claim the existence of independent entities (dharmas); in this it maintains the attitude of early Buddhism. Adherents of Mahāyāna Buddhism disagreed with the substantialist thought of the Sarvāstivādins and Sautrāntikas and, in emphasizing the doctrine of emptiness, Kalupahana holds, endeavored to preserve the early teaching. The Theravādins too refuted the Sarvāstivādins and Sautrāntikas (and other schools) on the grounds that their theories were in conflict with the non-substantialism of the canon. The Theravāda arguments are preserved in the Kathāvatthu. Some contemporary Theravādin figures have indicated a sympathetic stance toward the Mahāyāna philosophy found in texts such as the Heart Sūtra (Skt. Prajñāpāramitā Hṛdaya) and Nāgārjuna's Fundamental Stanzas on the Middle Way (Skt. Mūlamadhyamakakārikā). 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Genetic_studies_of_Jews] | [TOKENS: 20999] |
Contents Genetic studies of Jews Genetic studies of Jews are part of the population genetics discipline and are used to analyze the ancestry of Jewish populations, complementing research in other fields such as history, linguistics, archaeology, paleontology, and medicine. These studies investigate the origins of various Jewish ethnic divisions by using DNA to investigate whether different Jewish and non-Jewish populations have shared ancestry or not. The medical genetics of Jews are studied for population-specific diseases and disease commonalities with other ethnicities. Studies on Jewish populations have been principally conducted using three types of genealogical DNA tests: autosomal (atDNA), mitochondrial (mtDNA), and Y-chromosome (Y-DNA). Autosomal testing, which looks at the largest sets of genes within people's DNA, shows that Jewish populations tended to form genetic isolates – relatively closely related groups in independent communities, with most members of a community sharing significant ancestry. The Ashkenazi Jews form the largest such group. Mitochondrial and Y-DNA tests look at maternal and paternal ancestry respectively, via two small groups of genes transmitted only through female or male ancestors. Studies on the genetic composition of Ashkenazi, Sephardi, and Mizrahi Jewish populations of the Jewish diaspora show significant amounts of shared Middle Eastern ancestry, as well as admixture from their host populations. Jews living in the North African, Italian, and Iberian regions show variable frequencies of genetic overlap with the historical non-Jewish populations along the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Middle-Eastern admixture is mainly Southern European. Some researchers have remarked on an especially close relationship between Ashkenazi Jews and modern Italians, Greeks, and other Southern Europeans. The Bene Israel and the Cochin Jews of India, and the Beta Israel of Ethiopia, resemble their local populations but may also have some Near Eastern lineages on the paternal side. Religious, historical, and genetic perspectives on Jewish identity Jewish identity and Jewish peoplehood are multifaceted, and there are multiple theories on the ethnic origins of Jews. In addition to the religion of Judaism, genetics and political and ethnic divisions have influenced Jewish identity. The traditional narrative is that Jews descended from the Israelites, without implying that all Jews are biological descendants, since conversion has always been part of Judaism. Attempts to furnish genetic evidence corroborating the biblical narrative have been disputed. The advent of modern genetic research methods has led to extensive genetic studies on the topic. Such research has identified genotypic common denominators of Jewish people, but according to Raphael Falk, while certain detectable Middle Eastern genetic components exist in numerous Jewish communities, there is no evidence for a single Jewish prototype, and "any general biological definition of Jews is meaningless". Autosomal DNA These studies focus upon the autosomes, the 22 homologous pairs of non-sex chromosomes. Autosomal DNA studies show high levels of genetic relatedness among Ashkenazi, Sephardi, and Mizrahi Jews, corresponding to a shared Middle Eastern ancestry with variations in regional admixture. 
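Many of the studies summarized in this section report relatedness and differentiation between populations using Wright's fixation index (Fst). As a textbook illustration only, and not the exact estimator used in any particular study cited here: at a biallelic site with allele frequency p, the expected heterozygosity is H = 2p(1 - p), and
\[ F_{ST} = \frac{H_T - H_S}{H_T} \]
where H_T is the heterozygosity expected if the populations being compared were pooled into one, and H_S is the average heterozygosity within each population. Values near 0 indicate that the populations are nearly indistinguishable at that site, while values approaching 1 indicate strong differentiation; genome-wide estimates average this quantity over many sites.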
Autosomal DNA evidence supports the historical narrative of Jewish populations originating from the ancient Levant, with genetic diversity shaped by migrations, admixture, and isolation over millennia. Ashkenazi Jews share genetic similarities with Southern Europeans, such as Italians and Greeks, while exhibiting unique markers distinguishing them from non-Jewish groups. Similarly, North African Jews also exhibit proximity to European and Middle Eastern groups, reflecting their historical migration pattern. Other studies also reveal significant regional genetic diversity, such as the Berber admixture in Libyan Jews, or Ethiopian Jews' local ancestry combined with Middle Eastern links. In autosomal analyses, the Iraqi Jews, Iranian Jews, Bukharian Jews, Kurdish Jews, Mountain Jews, and Georgian Jews form a close genetic cluster. When examined at a more detailed level, the groups can be separated from each other. This cluster plots between Levantine and Northern West Asian populations. Syrian and North African Jews are separate from it and closer to the Sephardi Jews. Yemenite Jews are distinct from other Jewish groups and cluster with the non-Jewish population of the Arabian Peninsula. In 2022, a three-year-long study that analyzed the DNA from the remains of 38 individuals from an excavated Jewish cemetery in Erfurt dating to the 14th century found that the medieval Erfurt Ashkenazi community was more genetically diverse than modern Ashkenazi Jews. The medieval Erfurt community was found to consist of two groups, one which had more Eastern European ancestry than modern Ashkenazi Jews and another with more Middle Eastern ancestry which was also genetically close to German and French Ashkenazi Jews and Turkish Sephardi Jews. The groups also had different levels of oxygen isotopes in their teeth, suggesting they used water sources from different areas in childhood, indicating that one of these groups migrated to Erfurt. These results seem to back up historical research which has suggested that medieval Ashkenazi Jewry was culturally divided between Western Jews, who originally lived in the Rhineland (and who may be the group with more Middle Eastern ancestry), and Eastern Jews, who originally lived in eastern Germany, Austria, Bohemia, Moravia, and Silesia (and who may be the group with more European ancestry). Erfurt lay at the boundary of these communities. The study also found evidence of the historic founder effect of Ashkenazi Jewry, with a third of the individuals sampled found to descend from a single woman along the maternal line. The genome of modern Ashkenazi Jewry was found to appear as a near-even mixture between the two groups, with about 60% of modern Ashkenazi DNA found to come from the group with more Middle Eastern ancestry and 40% found to come from the group with more Eastern European ancestry, suggesting that they eventually merged into a single Ashkenazi culture. The study's admixture models for Erfurt Ashkenazi Jews (EAJ) varied, but the authors concluded that "Under the extensive set of models we studied, the ME [Middle Eastern] ancestry in EAJ is estimated in the range 19%–43% and the Mediterranean European ancestry in the range 37%–65% [the remainder of the European ancestry being Eastern European]. However, the true ancestry proportions could be higher or lower than implied by these ranges." 
They continued, "Our results therefore should only be interpreted to suggest that AJ ancestral sources have links to populations living in Mediterranean Europe and the Middle East today." A 2020 genetic study on Bronze Age and Iron Age southern Levantine (Canaanite) remains found evidence of large-scale migration of populations related to those of the Zagros or Caucasus into the southern Levant by the Bronze Age and increasing over time (resulting in a Canaanite population descended from both those migrants and earlier Neolithic Levantine peoples). The findings were found to be consistent with modern-day non-Jewish Arabic-speaking Levantine populations (such as Syrians, Lebanese, Palestinians, and Druze) and Jewish groups (such as Moroccan Sephardi Jews, Ashkenazi Jews, and Iranian Jews) "having 50% or more of their ancestry from people related to groups who lived in the Bronze Age Levant and the Chalcolithic Zagros." Ashkenazi Jews were found to have 41% European admixture and Moroccan Jews were found to have 31% European admixture. Ethiopian Jews were found to derive 80% of their ancestry from an East African or Horn African component but also carried some Canaanite-like and Zagros-like ancestry. This does not necessarily mean that any of these present-day groups bear direct ancestry from people who lived in the Middle to Late Bronze Age Levant or in Chalcolithic Zagros; rather, it indicates that they have ancestries from populations whose ancient proxy can be related to the Middle East. A 2017 study by Xue et al., running different tests on Ashkenazi Jewish genomes, found an approximately even mixture of Middle Eastern and European ancestry and concluded that the true fraction of European ancestry was possibly about 60%, with the remaining 40% being Middle Eastern. The authors estimated the Levant as the most likely source of Middle Eastern ancestry in Ashkenazi Jews, and also estimated that between 60% and 80% of the European ancestry was Southern European, "with the rest being likely Eastern European." In 2011, Moorjani et al. detected 3%–5% sub-Saharan African ancestry in all eight of the diverse Jewish populations that they analyzed (including Ashkenazi, Syrian, Iranian, Iraqi, Greek, Turkish, and Italian Jews). The timing of this African admixture among all Jewish populations was identical. The exact date was not determined, but it was estimated to have taken place between 1,600 and 3,400 years ago. Although African admixture was also detected among Southern Europeans and Near Eastern populations, this admixture was found to be younger than that in the Jewish populations. The authors interpreted these findings as evidence of the common origin of these eight Jewish groups. "It is intriguing that the Mizrahi Irani and Iraqi Jews—who are thought to descend at least in part from Jews who were exiled to Babylon about 2,600 years ago—share the signal of African admixture. A parsimonious explanation for these observations is that they reflect a history in which many of the Jewish groups descend from a common ancestral population which was itself admixed with Africans, before the beginning of the Jewish diaspora that occurred in 8th to 6th century BC", the authors concluded. In 2012, two major genetic studies were carried out under the leadership of Harry Ostrer, from the Albert Einstein College of Medicine. The results were published in the Proceedings of the National Academy of Sciences. 
The genes of 509 Jewish donors from 15 different backgrounds and 114 non-Jewish donors of North African origin were analyzed. Ashkenazi, Sephardi, and Mizrahi Jews were found to be closer genetically to each other than to their long-term host populations, and all of them were found to have Middle Eastern ancestry, together with varying amounts of admixture from their local populations. Mizrahi and Ashkenazi Jews were found to have diverged from each other approximately 2,500 years in the past, around the time of the Babylonian exile. The studies also reconfirmed the results of previous studies which found that North African Jews were more closely related to each other and to European and Middle Eastern Jews than to their non-Jewish host populations. The genome-wide ancestry of North African Jewish groups was compared with respect to European (Basque), Maghrebi (Tunisian non-Jewish), and Middle Eastern (Palestinian) origins. The Middle Eastern component was found to be comparable across all North African Jewish and non-Jewish groups, while North African Jewish groups showed increased European and decreased levels of North African (Maghrebi) ancestry, with Moroccan and Algerian Jews tending to be genetically closer to Europeans than Djerban Jews. The study found that Yemenite, Ethiopian, and Georgian Jews formed their own distinctive, genetically linked clusters. In particular, Yemenite Jews, who had previously been believed to have lived in isolation, were found to have genetic connections to their host population, suggesting that some conversion of local Arabs to Judaism had taken place. Georgian Jews were found to share close connections to Iraqi and Iranian Jews, as well as other Middle Eastern Jewish groups. The study also found that Syrian Jews share more genetic commonality with Ashkenazi Jews than with other Middle Eastern Jewish populations. According to the study, the analysis revealed distinctive North African Jewish population clusters with proximity to other Jewish populations and variable degrees of Middle Eastern, European, and North African admixture. Two major subgroups were identified by principal component, neighbor joining tree, and identity-by-descent analysis—Moroccan/Algerian and Djerban/Libyan—that varied in their degree of European admixture. These populations showed a high degree of endogamy and were part of a larger Ashkenazi and Sephardic Jewish group. By principal component analysis, these North African groups were orthogonal to contemporary populations from North and South Morocco, Western Sahara, Tunisia, Libya, and Egypt. Thus, this study is compatible with the history of North African Jews—founding during Classical Antiquity with proselytism of local populations, followed by genetic isolation with the rise of Christianity and then Islam, and admixture following the emigration of Sephardic Jews during the Inquisition. Ostrer also found that Ethiopian Jews are predominantly related to the indigenous populations of Ethiopia, but do have distant genetic links to the Middle East from more than 2,000 years in the past, and are likely descended from a few Jewish founders. It was speculated that the community began when a few itinerant Jews settled in Ethiopia in ancient times, converted locals to Judaism, and married into the local populations. A 2012 study by Eran Elhaik analyzed data collected for previous studies and concluded that the DNA of Eastern and Central European Jewish populations indicates that their ancestry is "a mosaic of Caucasus, European, and Semitic ancestries". 
For the study, Bedouins and Jordanian Hashemites, known to descend from Arabian tribes, were assumed to be a valid genetic surrogate of ancient Jews, whereas the Druze, known to come from Syria, were assumed to be non-Semitic immigrants into the Levant. Armenians and Georgians were also used as surrogate populations for the Khazars, who spoke a Turkic language unrelated to Georgian or Armenian. On this basis, a relatively strong connection to the Caucasus was proposed because of the stronger genetic similarity of these Jewish groups to modern Armenians, Georgians, Azerbaijani Jews, Druze and Cypriots, compared to a weaker genetic similarity with Hashemites and Bedouins. This proposed Caucasian component of ancestry was in turn taken to be consistent with the Khazarian Hypothesis as an explanation of part of the ancestry of Ashkenazi Jews. A study by Haber et al. (2013) noted that while previous studies of the Levant, which had focused mainly on diaspora Jewish populations, showed that the "Jews form a distinctive cluster in the Middle East", these studies did not make clear "whether the factors driving this structure would also involve other groups in the Levant". The authors found strong evidence that modern Levant populations descend from two major apparent ancestral populations. One set of genetic characteristics that are shared with modern-day Europeans and Central Asians is most prominent in the Levant amongst "Lebanese, Armenians, Cypriots, Druze and Jews, as well as Turks, Iranians, and Caucasian populations". The second set of inherited genetic characteristics is shared with populations in other parts of the Middle East as well as some African populations. Levant populations in this category today include "Palestinians, Jordanians, Syrians, as well as North Africans, Ethiopians, Saudis, and Bedouins". Concerning this second component of ancestry, the authors remark that it correlates with "the pattern of the Islamic expansion" and that "a pre-Islamic expansion Levant was more genetically similar to Europeans than to Middle Easterners," but they also say that "its presence in Lebanese Christians, Sephardi and Ashkenazi Jews, Cypriots, and Armenians might suggest that its spread to the Levant could also represent an earlier event". The authors also found a strong correlation between religion and apparent ancestry in the Levant: all Jews (Sephardi and Ashkenazi) cluster in one branch; Druze from Mount Lebanon and Druze from Mount Carmel are depicted on a private branch; and Lebanese Christians form a private branch with the Christian populations of Armenia and Cyprus, placing the Lebanese Muslims as an outer group. The predominantly Muslim populations of Syrians, Palestinians, and Jordanians cluster on branches with other Muslim populations as distant as Morocco and Yemen. A 2013 study by Doron M. Behar, Mait Metspalu, Yael Baran, Naama M. Kopelman, Bayazit Yunusbayev et al., integrating genotypes from the largest data set available to date (1,774 samples from 106 Jewish and non-Jewish populations) to assess Ashkenazi Jewish genetic origins against the regions of potential Ashkenazi ancestry (Europe, the Middle East, and the region historically associated with the Khazar Khaganate), concluded that "This most comprehensive study... 
does not change and in fact reinforces the conclusions of multiple past studies, including ours and those of other groups (Atzmon and others, 2010; Bauchet and others, 2007; Behar and others, 2010; Campbell and others, 2012; Guha and others, 2012; Haber and others, 2013; Henn and others, 2012; Kopelman and others, 2009; Seldin and others, 2006; Tian and others, 2008). We confirm the notion that the Ashkenazi, North African, and Sephardi Jews share substantial genetic ancestry and that they derive it from Middle Eastern and European populations, with no indication of a detectable Khazar contribution to their genetic origins." The authors also reanalyzed the 2012 study of Eran Elhaik and found that "The provocative assumption that Armenians and Georgians could serve as appropriate proxies for Khazar descendants is problematic for a number of reasons as the evidence for ancestry among Caucasus populations do not reflect Khazar ancestry". Also, the authors found that "Even if it were allowed that Caucasus affinities could represent Khazar ancestry, the use of the Armenians and Georgians as Khazar proxies is particularly poor, as they represent the southern part of the Caucasus region, while the Khazar Khaganate was centered in the North Caucasus and further to the north. Furthermore, among populations of the Caucasus, Armenians and Georgians are geographically the closest to the Middle East, and are therefore expected a priori to show the greatest genetic similarity to Middle Eastern populations." Concerning the similarity of South Caucasus populations to Middle Eastern groups, which was observed at the level of the whole genome in one recent study (Yunusbayev and others, 2012), the authors found that "Any genetic similarity between Ashkenazi Jews and Armenians and Georgians might merely reflect a common shared Middle Eastern ancestry component, actually providing further support to a Middle Eastern origin of Ashkenazi Jews, rather than a hint for a Khazar origin". The authors claimed that "If one accepts the premise that similarity to Armenians and Georgians represents Khazar ancestry for Ashkenazi Jews, then by extension one must also claim that Middle Eastern Jews and many Mediterranean European and Middle Eastern populations are also Khazar descendants. This claim is clearly not valid, as the differences among the various Jewish and non-Jewish populations of Mediterranean Europe and the Middle East predate the period of the Khazars by thousands of years". A 2014 study by the geneticist Shai Carmi (Hebrew University) and colleagues, published in Nature Communications, found that the Ashkenazi Jewish population originates from an even mixture between Middle Eastern and European peoples, descending from 330 to 350 individuals who were genetically about half Middle Eastern and half European, making all Ashkenazi Jews related to the point of being at least 30th cousins or closer. According to the authors, this genetic bottleneck likely occurred some 600–800 years in the past, followed by rapid growth and genetic isolation (a growth rate of 16–53% per generation). The principal component analysis of common variants in the sequenced AJ samples confirmed previous observations, namely, the proximity of the Ashkenazi Jewish cluster to other Jewish, European, and Middle Eastern populations. This was confirmed by a 2022 genome study by Shamam Waldman and colleagues (also of the Hebrew University) 
published in Cell, which found that modern Ashkenazi Jews descend from a small group, with the original researcher, Shai Carmi, stating, "Whether they're from Israel or New York, the Ashkenazi population today is homogenous genetically." A 2016 study by Elhaik et al., published in the Oxford University Press journal Genome Biology and Evolution, found that the DNA of Ashkenazi Jews originated in northeastern Turkey. The study found that 90% of Ashkenazi Jews could be traced to four ancient villages in northeastern Turkey. The researchers speculated that the Ashkenazi Jews originated in the first millennium, when Iranian Jews converted Greco-Roman, Turkish, Iranian, southern Caucasian, and Slavic populations inhabiting Turkey, and that the Yiddish language also originated there among Jewish merchants as a cryptic language in order to gain advantage in trade along the Silk Road. In a joint study published in 2016 in Genome Biology and Evolution, a group of geneticists and linguists from the UK, Czech Republic, Russia, and Lithuania dismissed both the genetic and linguistic components of Elhaik's 2016 study. As for the genetic component, the authors argued that the genetic "GPS tool" (as used by Elhaik et al.) would place Italians and Spaniards in Greece, all Tunisians and some Kuwaitis in the Mediterranean Sea, all Greeks in Bulgaria and the Black Sea, and all Lebanese along a line connecting Egypt and the Caucasus; "These cases are sufficient to illustrate that mapping of test individuals has nothing to do with ancestral locations", the authors wrote. As for the linguistic component, the authors stated that "Yiddish is a Germanic language, leaving no room for the Slavic relexification hypothesis and for the idea of early Yiddish-Persian contacts in Asia Minor", and concluded that the claim that "Yiddish is a Slavic language created by Irano-Turko-Slavic Jewish merchants along the Silk Roads as a cryptic trade language, spoken only by its originators to gain an advantage in trade" (Das et al. 2016) "remains an assertion in the realm of unsupported speculation". In a 2016 analysis of ancient DNA from six Natufians and a Levantine Neolithic individual, regarded as some of the likely Judaean progenitors, Elhaik found that the ancient individuals clustered predominantly with modern-day Palestinians and Bedouins and marginally overlapped with Arabian Jews, while Ashkenazic Jews clustered away from these ancient Levantine individuals and adjacent to Neolithic Anatolians and Late Neolithic and Bronze Age Europeans. A 2016 study of Indian Jews from the Bene Israel community by Waldman et al. found that the genetic composition of the community is "unique among Indian and Pakistani populations we analyzed in sharing considerable genetic ancestry with other Jewish populations. Putting together the results from all analyses point to Bene Israel being an admixed population with both Jewish and Indian ancestry, with the genetic contribution of each of these ancestral populations being substantial." The authors also examined the proportion and roots of the shared Jewish ancestry and the local genetic admixture: "In addition, we performed f4-based analysis to test whether Bene Israel are closer to Jews than to non-Jewish Middle-Eastern populations. We found that Middle-Eastern Jewish populations were closer to Bene Israel as compared to other Middle-Eastern populations examined (Druze, Bedouin, and Palestinians). 
Non-Middle-Eastern Jewish populations were still closer to Bene Israel as compared to Bedouin and Palestinians, but not as compared to Druze. These results further support the hypothesis that the non-Indian ancestry of Bene Israel is Jewish specific, likely from a Middle-Eastern Jewish population." An autosomal DNA study carried out in 2010 by Atzmon et al. examined the origin of Iranian, Iraqi, Syrian, Turkish, Greek, Sephardic, and Ashkenazi Jewish communities. The study compared these Jewish groups with 1,043 unrelated individuals from 52 worldwide populations. To further examine the relationship between Jewish communities and European populations, 2,407 European subjects were divided into 10 groups based on the geographic region of their origin. This study confirmed previous findings of a shared Middle Eastern origin of the above Jewish groups and found that "the genetic connections between the Jewish populations became evident from the frequent identity by descent (IBD) across these Jewish groups (63% of all shared segments)". Jewish populations shared more and longer segments with one another than with non-Jewish populations, highlighting the commonality of Jewish origin. Among pairs of populations ordered by total sharing, 12 out of the top 20 were pairs of Jewish populations, and "none of the top 30 paired a Jewish population with a non-Jewish one". Atzmon concludes that each Jewish group demonstrated Middle Eastern ancestry and variable admixture from its host population, and that the split between Middle Eastern and European/Syrian Jews, calculated by simulation and comparison of length distributions of IBD segments, occurred 100–150 generations ago, which was described as "compatible with a historical divide that is reported to have occurred more than 2500 years ago", as the Jewish communities in Iraq and Iran were formed by Jews in the Babylonian and Persian empires during and after the Babylonian exile. The main difference between Mizrahi and Ashkenazi/Sephardic Jews was the absence of Southern European components in the former. According to these results, European/Syrian Jewish populations, including the Ashkenazi Jewish community, were formed later, as a result of the expulsion and migration of Jews from Palestine during Roman rule. Concerning Ashkenazi Jews, this study found that genetic dates "are incompatible with theories that Ashkenazi Jews are for the most part the direct lineal descendants of converted Khazars or Slavs". Citing Behar, Atzmon states that "Evidence for founder females of Middle Eastern origin has been observed in all Jewish populations based on non-overlapping mitochondrial haplotypes with coalescence times >2000 years". The populations most closely related to the Jewish groups were the Palestinians, Bedouins, Druze, Greeks, and Italians. Regarding this relationship, the authors conclude that "These observations are supported by the significant overlap of Y chromosomal haplogroups between Israeli and Palestinian Arabs with Ashkenazi and non-Ashkenazi Jewish populations". A 2010 study by Zoossmann-Diskin concluded that, based upon analysis of the X chromosome and seventeen autosomal markers, Eastern European Jewish populations and Jewish populations from Iran, Iraq, and Yemen do not have the same genetic origins. 
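The "f4-based analysis" quoted in the Bene Israel study above belongs to a standard family of allele-frequency statistics. As a minimal sketch of the general form only (the specific population configurations used by Waldman et al. are an assumption here, not taken from the study):
\[ f_4(A, B; C, D) = \frac{1}{M} \sum_{m=1}^{M} \left(p_A^{(m)} - p_B^{(m)}\right)\left(p_C^{(m)} - p_D^{(m)}\right) \]
where p_X^(m) is the allele frequency of population X at SNP m, averaged over M SNPs. A value significantly different from zero indicates that the four populations do not fit a simple unadmixed tree ((A,B),(C,D)); for instance, a hypothetical test of the form f4(Bene Israel, outgroup; Jewish population, non-Jewish Middle Eastern population) yielding a positive value would indicate that the Bene Israel share more genetic drift with the Jewish population than with the non-Jewish one.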
In particular, concerning Eastern European Jews, Zoossmann-Diskin concluded that the evidence points to a dominant amount of Southern European, and specifically Italian, ancestry, which he attributed to conversions to Judaism in ancient Rome that are also supported by historical evidence. Concerning the similarity between Sephardi and Ashkenazi Jews, he stated that the reasons are uncertain but that it is likely due to Sephardic Jews also having "Mediterranean basin" ancestry, like the Ashkenazi Jews. A 2009 study on various European and Near Eastern ethnic groups found that Ashkenazi Jews show smaller genetic distances (Fst) to Italians, Greeks, Germans, and other European groups than to Levantine groups such as the Druze and Palestinians. Though it also found that the Ashkenazi Jews were mainly a population "clearly of southern [Mediterranean] origin", they "appear to have a unique genotypic pattern that may not reflect geographic origins." A 2009 study by Goldstein et al. shows that it is possible to predict full Ashkenazi Jewish ancestry with 100% sensitivity and 100% specificity, although the exact dividing line between a Jewish and non-Jewish cluster will vary across sample sets, which in practice would reduce the accuracy of the prediction. While the full historical demographic explanations for this distinction remain to be resolved, it is clear that the genomes of individuals with full Ashkenazi Jewish ancestry carry an unambiguous signature of their Jewish ancestry, which the authors suggested is more likely to be due to their specific Middle Eastern ancestry than to inbreeding. The authors note that there is almost perfect separation along PC 1 and that most of the non-Jewish Europeans closest to the Jews on this PC are of Italian or Eastern Mediterranean origin. In a 2009 study by Kopelman et al., four Jewish groups, Ashkenazi, Turkish, Moroccan, and Tunisian, were found to share a common origin from the Middle East, with more recent admixture that has resulted in "intermediate placement of the Jewish populations compared to European and Middle Eastern populations". The authors found that "the closest genetic neighbors to most Jewish groups were the Palestinians, Israeli Bedouins, and Druze in addition to the Southern Europeans". The Tunisian Jews were found to be distinct from the three other Jewish populations, which suggests, according to the authors, a greater genetic isolation or a significant local Berber ancestry, as in the case of Libyan Jews. Concerning the theory of Khazar ancestry in Ashkenazi Jews, the authors found no direct evidence. They did find genetic similarities between Jews, especially Ashkenazi Jews, and the Adyghe people, a group from the Caucasus whose region was formerly occupied by the Khazars; however, the Adyghe, living on the edge of geographical Europe, are genetically more closely related to Middle Easterners, including Palestinians, Bedouin, and non-Ashkenazi Jews, than to Europeans. Another study, by L. Hao et al., examined seven Jewish populations with different geographic origins (Ashkenazi, Italian, Greek, Turkish, Iranian, Iraqi, and Syrian) and showed that the individuals all shared a common Middle Eastern background, although the groups were also genetically distinguishable from each other. 
In public comments, Harry Ostrer, the director of the Human Genetics Program at NYU Langone Medical Center, and one of the authors of this study, concluded, "We have shown that Jewishness can be identified through genetic analysis, so the notion of a Jewish people is plausible." A genome-wide genetic study carried out by Need et al. and published in 2009 showed that "individuals with full Jewish ancestry formed a clearly distinct cluster from those individuals with no Jewish ancestry." The study found that the Jewish cluster examined fell between the Middle Eastern and European populations. Reflecting on these findings, the authors concluded, "It is clear that the genomes of individuals with full Ashkenazi Jewish ancestry carry an unambiguous signature of their Jewish heritage, and this seems more likely to be due to their specific Middle Eastern ancestry than to inbreeding." The study also extended the analysis of European population genetic structure to include additional Southern European groups and Arab populations; while the Ashkenazi were clearly of southern origin based on both PCA and STRUCTURE analyses, in this analysis of diverse European populations the group appeared to have a unique genotypic pattern that may not reflect geographic origins. A 2008 study by Price et al. sampled Southern Italians, Jews, and other Europeans and isolated the genetic markers most accurate for distinguishing between European groups, achieving results comparable to those from genome-wide analyses. The study mined much larger datasets (more markers and more samples) to identify a panel of 300 highly ancestry-informative markers that accurately distinguish not just northwestern from southeastern European ancestry, but also Ashkenazi Jewish ancestry from that of Southern Europeans. A 2008 study by Tian et al. provides an additional example of the same clustering pattern, using samples and markers similar to those in their other study. European population genetic substructure was examined in a diverse set of more than 1,000 individuals of European descent, each genotyped with more than 300,000 SNPs. Both STRUCTURE and principal component analyses (PCA) showed that the largest division/principal component (PC) differentiated northern from southern European ancestry. A second PC further separated Italian, Spanish, and Greek individuals from those of Ashkenazi Jewish ancestry, as well as distinguishing among northern European populations. In separate analyses of northern European participants, other substructure relationships were discerned, showing a west-to-east gradient. In June 2010, a study by Behar et al. showed "that most Jewish samples form a remarkably tight subcluster with common genetic origin, that overlies Druze and Cypriot samples but not samples from other Levantine populations or paired Diaspora host populations. In contrast, Ethiopian Jews (Beta Israel) and Indian Jews (Bene Israel and Cochini) cluster with neighboring autochthonous populations in Ethiopia and western India, respectively, despite a clear paternal link between the Bene Israel and the Levant." "The most parsimonious explanation for these observations is a common genetic origin, which is consistent with an historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant." The authors say that the genetic results are concordant "with the dispersion of the people of ancient Israel throughout the Old World". 
Regarding the samples he used, Behar says, "Our conclusion favoring common ancestry (of Jewish people) over recent admixture is further supported by the fact that our sample contains individuals that are known not to be admixed in the most recent one or two generations." A study led by Harry Ostrer, published on 11 June 2010, found close links between Ashkenazi, Sephardi, and Mizrahi Jews and found them to be genetically distinct from non-Jews. In the study, DNA from the blood of 237 Jews and about 2,800 non-Jews was analyzed, and their degree of relatedness was determined through IBD. Individuals within the Ashkenazi, Sephardi, and Mizrahi groups shared high levels of IBD, roughly equivalent to that of fourth or fifth cousins. All three groups shared many genetic features, suggesting a common origin dating back more than 2,000 years. The study found that all three Jewish groups showed various signs of admixture with non-Jews; the genetic profiles of Ashkenazi Jews indicated between 30% and 60% admixture with Europeans, although they clustered more closely with Sephardi and Mizrahi Jews. In July 2010, Bray et al., using SNP microarray techniques and linkage analysis, found that Ashkenazi Jews clustered between Middle Eastern and European populations, but found a closer relationship between the Ashkenazim and several European populations (Tuscans, Italians, and French) than between the Ashkenazi Jews and Middle Eastern populations, and that European admixture "is considerably higher than previous estimates by studies that used the Y chromosome." They add that their study data "support the model of a Middle Eastern origin of the Ashkenazim population followed by subsequent admixture with host Europeans or populations more similar to Europeans," and that their data imply that modern Ashkenazi Jews are possibly more similar to Europeans than to modern Middle Easterners. The level of admixture with European populations was estimated at between 35% and 55%. The study used Druze and Palestinian Arab populations as the reference for the ancestral genome of world Jewry. With this reference point, the linkage disequilibrium in the Ashkenazi Jewish population was interpreted as matching "signs of interbreeding or 'admixture' between Middle Eastern and European populations". Also, in their press release, Bray stated: "We were surprised to find evidence that Ashkenazi Jews have higher heterozygosity than Europeans, contradicting the widely-held presumption that they have been a largely isolated group". The authors said that their calculations might have "overestimated the level of admixture", as it is possible that the true Jewish ancestors were genetically closer to Southern Europeans than to Druze and Palestinian Arabs. They noted that using the non-Ashkenazi Jewish Diaspora populations as a reference for a world Jewry ancestor genome would also be biased: "however, using the Jewish Diaspora populations as the reference Jewish ancestor will naturally underestimate the true level of admixture, as the modern Jewish Diaspora has also undergone admixture since their dispersion." A 2007 study by Bauchet et al. found that Ashkenazi Jews were most closely clustered with Arabic North African populations when compared to the global population of that study. In the European structure analysis, they share genetic similarities with Greeks and Sicilians, reflecting their eastern Mediterranean origins. A 2006 study by Seldin et al. 
used over five thousand autosomal SNPs to demonstrate European genetic substructure. The results showed "a consistent and reproducible distinction between 'northern' and 'southern' European population groups". Most northern, central, and eastern Europeans (Finns, Swedes, English, Irish, Germans, and Ukrainians) showed >90% in the 'northern' population group, while most individual participants with southern European ancestry (Italians, Greeks, Portuguese, Spaniards) showed >85% in the 'southern' group. Both Ashkenazi and Sephardic Jews showed >85% membership in the 'southern' group. Referring to the Jews clustering with southern Europeans, the authors state the results were "consistent with a later Mediterranean origin of these ethnic groups". An initial study conducted in 2001 by Noah Rosenberg and colleagues on six Jewish populations (Poland, Libya, Ethiopia, Iraq, Morocco, Yemen) and two non-Jewish populations (Palestinians and Druze) showed that while the eight groups had genetic links to each other, the Jews of Libya had a distinct genetic signature related to their genetic isolation and possible admixture with Berber populations.[a] This same study suggested a close relationship between the Jews of Yemen and those of Ethiopia. Paternal line Approximately 35% to 43% of Jewish men are in the paternal line known as haplogroup J[b] and its sub-haplogroups. This haplogroup is particularly present in the Middle East and Southern Europe. 15% to 30% are in haplogroup E1b1b[c] (or E-M35) and its sub-haplogroups, which are common in the Middle East, North Africa, and Southern Europe. The Mediterranean haplogroup T1a1 is found at varying frequencies, roughly 3% to 15% depending on the Jewish group studied, with the highest frequencies within Jewish communities native to the Fertile Crescent and East Africa. Studies of Levites and Cohanim provide further insights into Jewish paternal heritage. The Cohanim lineage, traditionally associated with priestly descent, has been linked to the Cohen Modal Haplotype (CMH), first identified by Skorecki et al. in 1997. This haplotype, found in a significant proportion of Cohanim, suggests descent from a single male ancestor approximately 3,000 years ago. A 2009 study by Hammer et al. refined this understanding, identifying J-P58 (or J1E) as the most common haplogroup among Cohanim, accounting for 46.1% of their lineages. These findings confirm a distinct paternal lineage among Cohanim, consistent with biblical accounts of priestly descent. Ashkenazi Jews, while showing minor European genetic input (~12–23%), remain closer to Middle Eastern and Sephardi Jews than to European populations. A study on Levites highlighted a high proportion (~50%) of haplogroup R1a-M582, which further indicates a Near Eastern origin rather than European ancestry. These genetic findings, together with broader studies of Y-DNA haplogroup frequencies, suggest a shared Middle Eastern origin across Jewish populations, shaped by migrations, isolation, and limited admixture with host populations. In 1992, G. Lucotte and F. David were the first genetic researchers to document a common paternal genetic heritage between Sephardi and Ashkenazi Jews. Another study, published just a year later, suggested a Middle Eastern origin for Jewish paternal lineages. 
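Coalescence ages such as the roughly 3,000-year estimate cited above for the common ancestor of the Cohen Modal Haplotype are typically derived from the accumulated diversity of Y-chromosome microsatellites (STRs). A simplified sketch of the usual estimator (the published studies use more elaborate models and calibrated mutation rates, so this is illustrative only) is:
\[ T \approx \frac{\overline{\mathrm{ASD}}}{\mu} \text{ generations}, \qquad \overline{\mathrm{ASD}} = \frac{1}{NL} \sum_{i=1}^{N} \sum_{\ell=1}^{L} \left(x_{i\ell} - x^{\mathrm{modal}}_{\ell}\right)^2 \]
where x_{iℓ} is the repeat count of chromosome i at STR locus ℓ, x^{modal}_{ℓ} is the modal (presumed founder) repeat count at that locus, μ is the per-locus mutation rate per generation, and the average is taken over N chromosomes and L loci; multiplying by an assumed generation interval (for example 25–30 years) converts the result to years.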
In 2000, M. Hammer et al. conducted a study on 1,371 men and definitively established that part of the paternal gene pool of Jewish communities in Europe, North Africa, and the Middle East came from a common Middle Eastern ancestral population. They suggested that most Jewish communities in the Diaspora remained relatively isolated and endogamous compared to neighboring non-Jewish populations. Investigations by Nebel et al. of the Y-haplotypes (paternal lineages) of Ashkenazi, Kurdish, and Sephardi Jews (from North Africa, Turkey, the Iberian Peninsula, Iraq and Syria) indicate that Jews are genetically more similar to groups in the northern Fertile Crescent (Kurds, Turks and Armenians) than to their Arab neighbors, and suggest that some of this difference might be due to migration and admixture from the Arabian Peninsula into certain current Arabic-speaking populations during the last two millennia. Considering the timing of this origin, the study found that the common genetic Middle Eastern background of Jewish populations predates the ethnogenesis in the region, and concluded that "the Y chromosome pool of Jews is an integral part of the genetic landscape of the Middle East". The study nevertheless found a high degree of overall similarity between Jewish and local Arab groups. A 2003 study by Lucotte et al. found that Oriental, Sephardic, and Ashkenazic Jews, together with Lebanese and Palestinians, "seem to be similar in their Y-haplotype patterns, both with regard to the haplotype distributions and the ancestral haplotype VIII frequencies." The authors stated that these results confirm similarities in the Y-haplotype frequencies of these Near Eastern populations, which share a common geographic origin. In a study of Israeli Jews from several groups (Ashkenazi, Kurdish, North African Sephardi, and Iraqi Jews) and Palestinian Muslim Arabs, more than 70% of the Jewish men and 82% of the Arab men whose DNA was studied had inherited their Y chromosomes from the same paternal ancestors, who lived in the region within the last few thousand years. "Our recent study of high-resolution microsatellite haplotypes demonstrated that a substantial portion of Y chromosomes of Jews (70%) and of Palestinian Muslim Arabs (82%) belonged to the same chromosome pool." Kurdish, North African Sephardi, and Iraqi Jews were found to be genetically indistinguishable from one another while slightly but significantly differing from Ashkenazi Jews. In relation to the region of the Fertile Crescent, the same study noted: "In comparison with data available from other relevant populations in the region, Jews were found to be more closely related to groups in the north of the Fertile Crescent (Kurds, Turks, and Armenians) than to their Arab neighbors", which the authors suggested was due to migration and admixture from the Arabian Peninsula into certain current Arabic-speaking populations during the period of Islamic expansion. The Y chromosome of most Ashkenazi and Sephardi Jews contains mutations that are common among Middle Eastern peoples, but uncommon in the general European population, according to a study of haplotypes of the Y chromosome by Michael Hammer, Harry Ostrer and others, published in 2000. According to Hammer et al., this suggests that the paternal lineages of Ashkenazi Jews could be traced mostly to the Middle East. In Ashkenazi (and Sephardi) Jews, the most common paternal lineages generally are E1b1b, J2, and J1, with others found at lesser rates. Hammer et al. 
add that "Diaspora Jews from Europe, Northwest Africa, and the Near East resemble each other more closely than they resemble their non-Jewish neighbors." In addition, the authors have found that the "Jewish cluster was interspersed with the Palestinian and Syrian populations, whereas the other Middle Eastern non-Jewish populations (Saudi Arabians, Lebanese, and Druze) closely surrounded it. Of the Jewish populations in this cluster, the Ashkenazim were closest to South European populations (specifically the Greeks) and they were also closest to the Turks." The study estimated that on their paternal side, Ashkenazi Jews are descended from a core population of approximately 20,000 Jews who migrated from Italy into the rest of Europe over the course of the first millennium, and it also estimated that "All European Jews seem connected on the order of fourth or fifth cousins." The study also maintained that the paternal lines of Roman Jews were close to those of Ashkenazi Jews. It asserts that these mostly originated from the Middle East. The estimated cumulative total male genetic admixture amongst Ashkenazim was, according to Hammer et al., "very similar to Motulsky's average estimate of 12.5%. This could be the result, for example, of "as little as 0.5% per generation, over an estimated 80 generations", according to Hammer et al. Such figures indicated that there had been a "relatively minor contribution" to Ashkenazi paternal lineages by converts to Judaism and non-Jews. These figures, however, were based on a limited range of paternal haplogroups assumed to have originated in Europe. When potentially European haplogroups were included in the analysis, the estimated admixture increased to 23 percent (±7%).[d] The frequency of haplogroup R1b in the Ashkenazim population is similar to the frequency of R1b in Middle Eastern populations.[citation needed] This is significant because R1b is also the most common haplogroup amongst non-Jewish males in Western Europe. That is, the commonness of nominally Middle Eastern subclades of R1b amongst Ashkenazim tends to minimize the Western European contribution to the ~10% of R1b found amongst Ashkenazim. A large study by Behar et al. (2004) of Ashkenazi Jews records a percentage of 5–8% European contribution to the Ashkenazi paternal gene pool.[e] In the words of Behar: Because haplogroups R-M17 (R1a) and R-P25 (R1b) are present in non-Ashkenazi Jewish populations (e.g., at 4% and 10%, respectively) and in non-Jewish Near Eastern populations (e.g., at 7% and 11%, respectively; Hammer et al. 2000; Nebel et al. 2001), it is likely that they were also present at low frequency in the AJ (Ashkenazi Jewish) founding population. The admixture analysis shown in Table 6 suggests that 5%–8% of the Ashkenazi gene pool is, indeed, comprised of Y chromosomes that may have introgressed from non-Jewish European populations. For G. Lucotte et al., the R1b frequency is about 11%.[f] In 2004, when the calculation was made excluding Jews from the Netherlands the R1b rate was 5% ± 11.6%. Two studies by Nebel et al. in 2001 and 2005, based on Y chromosome polymorphic markers, suggested that Ashkenazi Jews are more closely related to other Jewish and Middle Eastern groups than they are to their host populations in Europe (defined in the using Eastern European, German, and French Rhine Valley populations). Ashkenazi, Sephardic, and Kurdish Jews were all very closely related to the populations of the Fertile Crescent, even closer than to Arabs. 
The study speculated that the ancestors of the Arab populations of the Levant might have diverged due to mixing with migrants from the Arabian Peninsula. However, 11.5% of male Ashkenazim (more specifically, 50% of Levites but only 1.7% of Cohanim) were found to belong to R1a1a (R-M17), the dominant Y chromosome haplogroup in Eastern European populations. They hypothesized that these chromosomes could reflect low-level gene flow from surrounding Eastern European populations, or, alternatively, that both the Ashkenazi Jews with R1a1a (R-M17), and to a much greater extent Eastern European populations in general, might partly be descendants of Khazars. They concluded, "However, if the R1a1a (R-M17) chromosomes in Ashkenazi Jews do indeed represent the vestiges of the mysterious Khazars then, according to our data, this contribution was limited to either a single founder or a few closely related men, and does not exceed ~12% of the present-day Ashkenazim." This hypothesis is also supported by David B. Goldstein in his book Jacob's Legacy: A Genetic View of Jewish History. However, Faerman (2008) states that "External low-level gene flow of possible Eastern European origin has been shown in Ashkenazim but no evidence of a hypothetical Khazars' contribution to the Ashkenazi gene pool has ever been found." A 2017 study concentrating on the Ashkenazi Levites, among whom the proportion reaches 50%, pointed to a "rich variation of haplogroup R1a outside of Europe which is phylogenetically separate from the typically European R1a branches", noted that the particular R1a-Y2619 sub-clade testifies to a local origin, and concluded that the "Middle Eastern origin of the Ashkenazi Levite lineage based on what was previously a relatively limited number of reported samples, can now be considered firmly validated." Furthermore, 7% of Ashkenazi Jews have the haplogroup G2c, which is found mainly among Pashtuns and, at lower frequencies, among members of all major Jewish ethnic groups, Palestinians, Syrians, and Lebanese. Behar et al. suggest that those haplogroups are minor Ashkenazi founding lineages. Among Ashkenazi Jews, the Jews of the Netherlands seem to have a particular distribution of haplogroups, as nearly one quarter of them carry haplogroup R1b1 (R-P25), in particular the sub-haplogroup R1b1b2 (R-M269), which is characteristic of Western European populations. Ashkenazi men show a low level of Y-DNA diversity within each major haplogroup, suggesting that, relative to the size of the modern population, a comparatively small number of men contributed to its paternal ancestry. This possibly results from a series of founder events and high rates of endogamy within Europe. Although Ashkenazi Jews represent a recently founded population in Europe, founder effects suggest that they probably derived from a large and diverse ancestral source population in the Middle East, one that may have been larger than the source population from which non-Jewish Europeans derived. The first large study of the Jews of North Africa was led by Gerard Lucotte et al. in 2003. It showed that the Jews of North Africa[h] have paternal haplotype frequencies almost equal to those of Lebanese and Palestinian non-Jews. The authors also compared the haplotype distribution of Jews from North Africa with those of Sephardi Jews, Ashkenazi Jews, and "Oriental" (Mizrahi) Jews, and found that the Ashkenazim and the Mizrahim differed significantly from the other two groups. 
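Statements about low Y-DNA diversity, such as the one made above for Ashkenazi men, are usually quantified with a haplotype (gene) diversity index. The following sketch applies the standard Nei estimator, h = n/(n-1) * (1 - sum of squared haplotype frequencies), to two made-up samples; the haplotype labels and counts are hypothetical and are not taken from any of the studies cited here.

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased haplotype (gene) diversity:
    h = n/(n-1) * (1 - sum of squared haplotype frequencies)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_sq = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_sq)

# Hypothetical samples: one population in which most men share a few haplotypes
# (low diversity), and one in which every sampled man carries a distinct haplotype.
low_diversity = ["H1"] * 40 + ["H2"] * 8 + ["H3"] * 2
high_diversity = [f"H{i}" for i in range(50)]

print(round(haplotype_diversity(low_diversity), 2))   # ~0.34 (low diversity)
print(round(haplotype_diversity(high_diversity), 2))  # 1.0 (maximal diversity)
```

Low values of h within a haplogroup, as reported for Ashkenazi men, are what one would expect after founder events followed by sustained endogamy.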
The Jewish community of the island of Djerba in Tunisia is of special interest. Tradition traces this community's origins back to the time of the destruction of Solomon's Temple. Two studies have attempted to test this hypothesis: the first by G. Lucotte et al. in 1993, the second by F. Manni et al. in 2005. Both conclude that the paternal gene pool of the Jews of Djerba differs from that of the Arabs and Berbers of the island. In the first study, 77.5% of the samples tested are of haplotype VIII (probably corresponding to haplogroup J, according to Lucotte); the second shows that 100% of the samples belong to haplogroup J*. The second study suggests that it is unlikely that the majority of this community derives from an ancient colonization of the island, while for Lucotte it is unclear whether this high frequency really reflects an ancient connection. These studies therefore suggest that the paternal lineages of North African Jews come predominantly from the Middle East, with a minority contribution of African lineages, probably Berber. A study by Inês Nogueiro et al. (July 2009) on the Jews of north-eastern Portugal (region of Trás-os-Montes) showed that their paternal lines consisted of 35.2% lineages more typical of Europe (R: 31.7%, I: 3.5%) and 64.8% lineages more typical of the Near East than Europe (E1b1b: 8.7%, G: 3.5%, J: 36.8%, T: 15.8%); consequently, the Portuguese Jews of this region were genetically closer to other Jewish populations than to Portuguese non-Jews. In the article by Nebel et al., the authors show that Kurdish and Sephardi Jews have indistinguishable paternal genetic heritage, with both being similar to but slightly differing from Ashkenazi Jews (possibly due to low-level European admixture or genetic drift during isolation among Ashkenazim). The study shows that mixtures between Kurdish Jews and their Muslim hosts are negligible and that Kurdish Jews are closer to other Jewish groups than they are to their long-term host population. Hammer had already shown a strong correlation between the genetic heritage of Jews from North Africa and that of Kurdish Jews. One reported sample showed haplogroup T1 at 9/50 (18%). A 2002 study by geneticist Dror Rosengarten found that the paternal haplotypes of Mountain Jews "were shared with other Jewish communities and were consistent with a Mediterranean origin." A 2016 study by Karafet et al. found, in a sample of 17 Mountain Jewish men tested in Dagestan's Derbentsky District, that 11.8% belonged to haplogroup T-P77. The studies of Shen and Hammer et al. show that the paternal lineages of Yemenite Jews are very similar to those of other Jewish populations. They include Y haplogroups A3b2, E3b3a, E3b1, E3b1b, J1a, J2e and R1b10; the lowest frequency found was haplogroup T-M184, at 2/94 (2.1%) in one sample. A study by Lucotte and Smets has shown that the paternal gene pool of Beta Israel (Ethiopian Jews) was close to that of Ethiopian non-Jewish populations. This is consistent with the theory that Beta Israel are descendants of ancient inhabitants of Ethiopia, not of the Middle East. Hammer et al. (2000) and the team of Shen in 2004 arrive at similar conclusions, namely a genetic affinity with other peoples in the north of Ethiopia, which probably indicates a conversion of local populations. A study by Behar et al. (2010) on the Beta Israel showed a degree of Middle Eastern genetic clustering similar to that of Semitic-speaking non-Jewish Ethiopian Tigrayans and Amharas, and greater than that of Cushitic-speaking non-Jewish Ethiopian Oromos. 
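As a small check on how the Trás-os-Montes figures quoted above fit together, the sketch below groups the reported haplogroup frequencies into the two broad categories named in the text and verifies that they sum to the stated subtotals; the grouping simply reproduces the figures given here and is not a reanalysis of the underlying study.

```python
# Haplogroup frequencies (%) reported for the Trás-os-Montes Portuguese Jewish
# sample, as quoted in the text above (Nogueiro et al., 2009).
more_typical_of_europe = {"R": 31.7, "I": 3.5}
more_typical_of_near_east = {"E1b1b": 8.7, "G": 3.5, "J": 36.8, "T": 15.8}

european_total = sum(more_typical_of_europe.values())      # 35.2
near_east_total = sum(more_typical_of_near_east.values())  # 64.8

print(f"European-typical lineages: {european_total:.1f}%")       # 35.2%
print(f"Near Eastern-typical lineages: {near_east_total:.1f}%")  # 64.8%
print(f"Total: {european_total + near_east_total:.1f}%")         # 100.0%
```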
Genetic analysis shows that the Bene Israel of India cluster with the indigenous populations of western India, but do have a clear paternal link to the populations of the Levant. A more recent, detailed study of Indian Jews reported that their paternal ancestry is composed of Middle East-specific haplogroups (E, G, J(xJ2) and I) as well as common South Asian haplogroups (R1a, H, L-M11, R2). Nephrologist Karl Skorecki decided to analyze the Cohanim to see if they were the descendants of one man, in which case they should have a set of common genetic markers. To test this hypothesis, he contacted Michael Hammer of the University of Arizona, a researcher in molecular genetics and a pioneer in research on the Y chromosome. Their article, published in Nature in 1997, attracted considerable attention. A set of special markers (called the Cohen Modal Haplotype, or CMH) was defined as one more likely to be present in the Cohanim (defined as contemporary Jews named Cohen or a derivative), and it was proposed that this results from common descent from the ancient priestly lineage rather than from the Jewish population in general. However, subsequent studies showed that the number of genetic markers used and the number of samples (of self-identified Cohanim) were too small. The most recent study, conducted in 2009 by Hammer and Behar et al., reported that 20 of the 21 haplogroups observed among Cohanim do not derive from a single recent common haplogroup; five haplogroups account for 79.5% of all Cohanim. Among these five haplogroups, J-P58 (or J1E) accounts for 46.1% of Cohanim, and the second major haplogroup, J-M410 (or J2a), accounts for 14.4%. Hammer and Behar redefined an extended CMH haplotype, determined by a set of 12 markers, on the "background" haplogroup of the most important lineage, J1E (46.1%). This haplotype was absent among the non-Jews analyzed in the 2009 study. The divergence appears to date to 3,000 ± 1,000 years ago. This study nevertheless confirms that the current Cohen lineage descends from a small number of paternal ancestors. In the summary of their findings, the authors concluded that "Our estimates of the coalescence time also lend support to the hypothesis that the extended CMH represents a unique founding lineage of the ancient Hebrews that has been paternally inherited along with the Jewish priesthood." Molecular phylogenetics research published in 2013 and 2016 on the Levantine haplogroup J1 (J-M267) places the Y-chromosomal Aaron within subhaplogroup Z18271, with an age estimate of 2638–3280 years before present (yBP). The Lemba of South Africa, a Bantu-speaking people whose culture forbids the consumption of pork and requires male circumcision, have a high frequency of the Middle Eastern Y-chromosome Hg J-12f2a (25%), a potentially SEA Y, Hg K(xPQR) (32%), and a Bantu Y, E-PN1 (30%) (similar to E-M2). The Lemba tribe of Venda in South Africa claims to be Jewish and to have originated in Sena – possibly Yemenite Sena in Wadi Masila of the Hadramaut. There are indications of genetic connections with the Hadramaut, i.e., the Lemba Y-chromosomes and Hadramaut Y-chromosomes showed overlap. In addition, the Cohen Modal Haplotype (CMH) was also present within their subclan, the Buba, at a frequency higher than in the general Jewish population. It has been suggested by Tudor Parfitt and Yulia Egorova that their Jewish ancestors probably came along with general Semitic incursions into East Africa from South Arabia, and then moved slowly south through the area of Great Zimbabwe. A 2003 study of the Y-chromosome by Behar et al. 
pointed to multiple origins for Ashkenazi Levites, a priestly class who comprise approximately 4% of Ashkenazi Jews. It found that Haplogroup R1a1a (R-M17), which is uncommon in the Middle East or among Sephardi Jews, but dominant in Eastern Europe, is present in over 50% of Ashkenazi Levites, while the rest of Ashkenazi Levites' paternal lineage is of apparent Middle Eastern origin. Behar suggested a founding event, probably involving one or very few European men, occurring at a time close to the initial formation and settlement of the Ashkenazi community as a possible explanation. Nebel, Behar and Goldstein speculated that this may indicate a Khazar origin. However, a 2013 study by Rootsi, Behar et al. found that R1a-M582, the specific subclade of R1a to which all sampled Ashkenazi Levites with R1a belonged, was completely absent from a sample of 922 Eastern Europeans and was only found in one of the 2,164 samples from the Caucasus, while it made up 33.8% of non-Levite Ashkenazi R1a and was also found in 5.9% of Near Easterners bearing R1a. The clade, though less represented in Near Easterners, was more diverse among them than among Ashkenazi Jews. Rootsi et al. argued this supports a Near Eastern Hebrew origin for the paternal lineage R1a present among Ashkenazi Levites: R1a-M582 was also found among different Iranian populations, among Kurds from Cilician Anatolia and Kazakhstan, and among non-Ashkenazi Jews. Previous Y-chromosome studies have demonstrated that Ashkenazi Levites, members of a paternally inherited Jewish priestly caste, display a distinctive founder event within R1a, the most prevalent Y-chromosome haplogroup in Eastern Europe. Here we report the analysis of 16 whole R1 sequences and show that a set of 19 unique nucleotide substitutions defines the Ashkenazi R1a lineage. While our survey of one of these, M582, in 2,834 R1a samples reveals its absence in 922 Eastern Europeans, we show it is present in all sampled R1a Ashkenazi Levites, as well as in 33.8% of other R1a Ashkenazi Jewish males and 5.9% of 303 R1a Near Eastern males, where it shows considerably higher diversity. Moreover, the M582 lineage also occurs at low frequencies in non-Ashkenazi Jewish populations. In contrast to the previously suggested Eastern European origin for Ashkenazi Levites, the current data are indicative of a geographic source of the Levite founder lineage in the Near East and its likely presence among pre-Diaspora Hebrews. Maternal line Studies of mitochondrial DNA of Jewish populations are more recent and are still debatable. The maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this may indicate that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. Mitochondrial DNA (mtDNA) studies of Jewish populations reveal diverse maternal lineages with significant regional variations. For Ashkenazi Jews, approximately 40% of their mtDNA is linked to four female founders potentially of Near Eastern origin, although later research suggests that up to 81% of their maternal ancestry might stem from European women. Some studies propose ancient Near Eastern origins, while others European contributions. Sephardi Jews exhibit greater diversity in mtDNA, with limited genetic influence from local Berber or Arab populations. 
Mizrahi Jews, including those from Iraq and Persia, often trace their maternal lines back to a small number of Middle Eastern women.[vague] Ethiopian Jews share genetic characteristics with neighboring African populations, reflecting their regional origins, while Indian Jews, such as the Bene Israel and Cochin Jews, predominantly show indigenous maternal lineages, alongside some shared Jewish genetic markers. Across Jewish communities, genetic studies identify maternal founders dating back over 2,000 years, highlighting genetic bottlenecks and founder effects. These patterns suggest distinct genetic clusters for Ashkenazi, Sephardi, and Mizrahi Jews. According to Thomas et al. in 2002, several Jewish communities reveal direct-line maternal ancestry originating from a few women. This was seen in independently founded communities in different geographic areas. What they shared was limited later genetic input on the female side. Together, this is described as a founder effect. Those same communities showed diversity in the male lines that was similar to that of the non-Jewish population. Two studies in 2006 and 2008 suggested that about 40% of Ashkenazi Jews originate maternally from four female founders likely of Near Eastern origin who lived 1,000 years ago, while the populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect". Except for Ethiopian Jews and Indian Jews, it has been argued that all of the various Jewish populations have components of their mitochondrial genomes that were of Middle Eastern origin. In 2013, however, Richards et al. published work estimating that "80 percent of Ashkenazi maternal ancestry comes from women indigenous to Europe, and [only] 8 percent from the Near East, with the rest uncertain", suggesting that Jewish males migrated to Europe, took new wives from the local population, and converted them to Judaism; some geneticists, such as Doron Behar, have expressed disagreement with the study's conclusions. Another study, by Eva Fernandez and her colleagues, argues that the K lineages (claimed to be European in origin by Richards et al.) in Ashkenazi Jews might have an ancient Near Eastern source. Reflecting on previous mtDNA studies carried out by Behar, Atzmon et al. conclude that all major Jewish population groups show evidence for founder females of Middle Eastern origin with coalescence times >2000 years. A 2013 study by Richards et al., based on a much larger sample base, drew differing conclusions, namely that the Mt-DNA of Ashkenazi Jews originated among southern European women, in regions where Diaspora communities had been established centuries before the fall of the Second Temple in 70 CE. A 2014 study by Fernandez et al. found that Ashkenazi Jews display a frequency of haplogroup K which suggests an ancient Near Eastern origin, stating that this observation clearly contradicts the results of the study led by Richards, which suggested a predominantly European origin for the Ashkenazi community's maternal lines. However, the authors of the 2014 study also state that definitively answering the question of whether this group was of Jewish origin rather than the result of a Neolithic migration to Europe would require the genotyping of the complete mtDNA in ancient Near Eastern populations. In 2004, Behar et al. 
found that approximately 32% of Ashkenazi Jews belong to the mitochondrial Haplogroup K, which points to a genetic bottleneck having taken place some 100 generations prior. Haplogroup K itself is thought to have originated in Western Asia some 12,000 years ago. A 2006 study by Behar et al., based on high-resolution analysis of Haplogroup K (mtDNA), suggested that about 40% of the current Ashkenazi population is descended matrilineally from just four women, or "founder lineages", likely of mixed European and Middle Eastern origin. They concluded that these founder lineages may have originated in the Middle East in the 1st and 2nd centuries CE, and later underwent expansion in Europe. Moreover, a maternal-line "sister" lineage was found among the Jews of Portugal, North Africa, France, and Italy. They wrote: Both the extent and location of the maternal ancestral deme from which the Ashkenazi Jewry arose remain obscure. Here, using complete sequences of the maternally inherited mitochondrial DNA (mtDNA), we show that close to one-half of Ashkenazi Jews, estimated at 8,000,000 people, can be traced back to only four women carrying distinct mtDNAs that are virtually absent in other populations, with the important exception of low frequencies among non-Ashkenazi Jews. We conclude that four founding mtDNAs, likely of Near Eastern ancestry, underwent major expansion(s) in Europe within the past millennium… A 2007 study by J. Feder et al. confirmed the hypothesis of a non-European origin for the maternal founder lines. Their study did not address the geographical origin of Ashkenazim and therefore does not explicitly confirm the "Levantine" origin of these founders. This study revealed a significant divergence in total haplogroup distribution between the Ashkenazi Jewish populations and their European host populations, namely Russians, Poles, and Germans. They concluded that, regarding mtDNAs, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. The study also found that "the differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons." It supported previous interpretations that, in the direct maternal line, there was "little or no gene flow from the local non-Jewish communities in Poland and Russia to the Jewish communities in these countries." Considering Ashkenazi Jews, Atzmon (citing Behar above) states that beyond four founder mitochondrial haplogroups of possible Middle Eastern origin, which comprise approximately 40% of Ashkenazi Jewish mtDNA, the remainder of the mtDNA falls into other haplogroups, many of European origin. He noted that beyond Ashkenazi Jews, "Evidence for founder females of Middle Eastern origin has been observed in other Jewish populations based on non-overlapping mitochondrial haplotypes with coalescence times >2000 years". A 2013 study at the University of Huddersfield, led by Professor Martin B. Richards, concluded that 65%–81% of Ashkenazi Mt-DNA is European in origin, including all four founding mothers, and that most of the remaining lineages are also European. The results were published in Nature Communications in October 2013. The team analyzed about 2,500 complete and 28,000 partial Mt-DNA genomes of mostly non-Jews, and 836 partial Mt-DNA genomes of Ashkenazi Jews. The study claims that only 8% of Ashkenazi Mt-DNA could be identified as Middle Eastern in origin, with the origin of the rest being unclear. 
They wrote: If we allow for the possibility that K1a9 and N1b2 might have a Near Eastern source, then we can estimate the overall fraction of European maternal ancestry at ~65%. Given the strength of the case for even these founders having a European source, however, our best estimate is to assign ~81% of Ashkenazi lineages to a European source, ~8% to the Near East, and ~1% further to the east in Asia, with ~10% remaining ambiguous... Thus at least two-thirds and most likely more than four-fifths of Ashkenazi maternal lineages have a European ancestry. Regarding the origin of Ashkenazi admixture, the analyses suggest that "the first major wave of assimilation probably took place in Mediterranean Europe, most likely in Southern Europe, with substantial further assimilation of minor founders in west/central Europe." According to Richards, who acknowledged past research showing that Ashkenazi Jews' paternal origins are largely from the Middle East, the most likely explanation is that Ashkenazi Jews are descended from Middle Eastern men who moved to Europe and married local women whom they converted to Judaism. The authors found "less evidence for assimilation in Eastern Europe, and almost none for a source in the North Caucasus/Chuvashia, as would be predicted by the Khazar hypothesis." The study was criticized by geneticist Doron Behar, who stated that while the Mt-DNA of Ashkenazi Jews is of mixed Middle Eastern and European origins, the deepest maternal roots of Ashkenazi Jews are not European. Harry Ostrer said Richards' study seemed reasonable and corresponded to the known facts of Jewish history. Karl Skorecki of the Rambam Health Care Campus stated that there were serious flaws in the phylogenetic analysis. David B. Goldstein, the Duke University geneticist who first found similarities between the founding mothers of Ashkenazi Jewry and European populations, said that, although Richards' analysis was well-done and 'could be right,' the estimate that 80% of Ashkenazi Jewish Mt-DNA is European was not statistically justified given the random rise and fall of mitochondrial DNA lineages. Geneticist Antonio Torroni of the University of Pavia found the conclusions very convincing, adding that recent studies of cell nucleus DNA also show "a very close similarity between Ashkenazi Jews and Italians". Diaspora communities were established in Rome and in Southern Europe centuries before the fall of the Second Temple in 70 CE. A 2014 study by Fernandez et al. found that Ashkenazi Jews display a frequency of haplogroup K which suggests ancient Middle Eastern origins, stating that this observation contradicts the results of the study led by Richards which suggested a predominantly European origin for the Ashkenazi community's maternal line. However, the authors also state that definitively answering the question of whether this group was of Jewish origin rather than the result of a Neolithic migration to Europe would require the genotyping of the complete mtDNA in ancient Near Eastern populations. Commenting on the study by Richards, the authors wrote: According to that work the majority of the Ashkenazi mtDNA lineages can be assigned to three major founders within haplogroup K (31% of their total lineages): K1a1b1a, K1a9 and K2a2. The absence of characteristic mutations within the control region in the PPNB K-haplotypes allows discarding them as members of either sub-clades K1a1b1a or K2a2, both representing 79% of total Ashkenazi K lineages. 
However, without a high-resolution typing of the mtDNA coding region, it cannot be excluded that the PPNB K lineages belong to the third sub-cluster K1a9 (20% of Ashkenazi K lineages). Moreover, in light of the evidence presented here of a loss of lineages in the Near East since Neolithic times, the absence of Ashkenazi mtDNA founder clades in the Near East should not be taken as a definitive argument for its absence in the past. The genotyping of the complete mtDNA in ancient Near Eastern populations would be required to fully answer this question and it will undoubtedly add resolution to the patterns detected in modern populations in this and other studies. A 2022 study by Kevin Brook focused on the Mt-DNA of Ashkenazi Jews and used thousands of complete sequences. Brook's study concluded that the Ashkenazi maternal genome has significant roots in both the Middle East and Europe, the most frequent lineages being overwhelmingly of ancient Middle Eastern origin and a large number of uncommon lineages being of heterogeneous European origin. Brook found a total of six branches of haplogroup K in Ashkenazim, each representing a separate founding woman: K1a1b1*, K1a1b1a, K1a4a, K1a9, K2a*, and K2a2a1. He found that K1a9 is shared with Iraqi Jews and with non-Jews in Syria and Iran. K2a2a1 is shared with southern Europeans but is also found among Mizrahi Jews from the Caucasus and is the maternal sister to the Arabian haplogroup K2a2a2. He therefore proposed that K1a9 and K2a2a1 are likely of Hebrew origin. Brook similarly found Near Eastern roots for several more Ashkenazi haplogroups, including R0a2m and U1b1. K1a4a, which is also found in Egyptian Jews, Maghrebi Jews, and Turkish Jews, is interpreted as a lineage that may have come from an ancient Greek or Italian convert to Judaism, though Brook also found it in Syria. Several haplogroups are seen as indicating the assimilation of West Slavic women, including V7a and H11b1. The debate over potential Khazar descent was also reexamined. Although Brook did not find any connection either to the Chuvash people or to any of the medieval Khazar samples that have been collected to date, he pinpointed the Ashkenazic branch of N9a3 as the daughter subclade of a variety found among Bashkirs, a Turkic people of the Ural region, and proposes that the former could have come from a Khazar woman, or, alternatively, a woman from China or the North Caucasus. The back cover of Brook's book carries an endorsement by Skorecki. A 2025 study by Joseph Livni and Karl Skorecki examined mitochondrial DNA (mtDNA) to distinguish between founder and host population lineages in the Ashkenazi Jewish population. By accounting for the frequency of mtDNA signatures, the study found that absorbed lineages generally appear as singletons, while founder lineages are represented in multiple copies. The results indicated that fewer than 15% of individuals in the sample carried absorbed mtDNA lineages, suggesting that the majority of maternal lineages trace back to founder lineages. These findings do not support the hypothesis of a primarily non-Jewish European origin for the maternal founding population. Combined with existing Y-chromosome evidence pointing to a Near Eastern origin for Ashkenazi paternal lineages, the study concludes that both maternal and paternal lineages in the Ashkenazi population likely share a Near Eastern origin. 
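The singleton-based reasoning just described can be illustrated with a minimal sketch: each sampled individual is labelled with an mtDNA lineage, lineages seen only once are treated as candidate absorbed (host-derived) lineages, lineages seen in multiple copies as candidate founder lineages, and the fraction of individuals carrying singletons is reported. The lineage labels and counts below are hypothetical, and the actual study's treatment of mtDNA signatures is considerably more involved than this single-threshold rule.

```python
from collections import Counter

def singleton_fraction(lineages):
    """Fraction of sampled individuals whose mtDNA lineage appears only once
    in the sample (a rough proxy for absorbed, host-derived lineages)."""
    counts = Counter(lineages)
    singletons = sum(1 for lineage in lineages if counts[lineage] == 1)
    return singletons / len(lineages)

# Hypothetical sample: a few high-frequency founder lineages plus scattered singletons.
sample = (["K1a1b1a"] * 30 + ["K1a9"] * 15 + ["N1b2"] * 12 + ["K2a2a1"] * 8
          + ["H11b1", "V7a", "U5a1", "T2b", "J1c"])  # five singleton lineages

print(f"{singleton_fraction(sample):.1%} of individuals carry singleton lineages")
# -> 7.1% of individuals carry singleton lineages (5 of 70 in this toy sample)
```

A low singleton fraction of this kind is the sort of observation the study reads as evidence that most maternal lineages descend from the founding population rather than from later absorption.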
These conclusions challenge earlier models proposing a mixed ancestry involving European women and Near Eastern men, instead supporting a unified founder population of predominantly Near Eastern origin for both sexes. A critique of the study by Livni and Skorecki was published in a subsequent issue of the same journal, suggesting higher European maternal input, although less than Costa et al. had suggested, and stressing the importance of studying ancient DNA. Analysis of the mitochondrial DNA of the Jewish populations of North Africa (Morocco, Tunisia, Libya) was the subject of further detailed study in 2008 by Doron Behar et al. Their study concludes that Jews from this region do not carry the mitochondrial DNA haplogroups (M1 and U6) that are typical of the North African Berber and Arab populations. Behar et al. conclude that it is unlikely that North African Jews have significant Arab or Berber admixture, "consistent with social restrictions imposed by religious restrictions", or endogamy. This study also found genetic similarities between the European mitochondrial DNA pools of Ashkenazi and North African Jews, but differences between both of these diaspora groups and Jews from the Middle East. Genetic research by M. Thomas et al., based on studying only hypervariable region 1 (HVS-I), seemed to show that about 26%–27% of Moroccan Jews descend from one female ancestor, but when Behar et al. studied their complete mitogenomes they did not find any one founder of this population at a frequency that high. Behar's study found that 43% of Tunisian Jews are descended from four women along their maternal lines. According to Behar, 39.8% of the mtDNA of Libyan Jews "could be related to one woman carrying the X2e1a1a lineage". The mtDNA data recovered by D. Behar et al. were from a community descended from crypto-Jews located in the village of Belmonte in Portugal. Because of the small size of the sample and the community's long isolation, it is not possible to generalize the findings to the entire Iberian Peninsula. There was a relatively high presence of haplogroup T2e in Sephardim who arrived in Turkey and Bulgaria. This finding suggests that the subhaplogroup, which resembles that of populations living between Saudi Arabia, Egypt and north-central Italy more than that of the local Iberians, entered the Sephardic population relatively early, because had it appeared only at the end of the community's isolation in Iberia, there would have been insufficient time for it to spread through the population. The frequency of T2e matches in Spain and Portugal is drastically lower than among the Jewish groups listed above. Similarly, fewer matches for the Sephardic signature T2e5 were found in Iberia than in northern Mexico and the southwestern United States. Behar et al. proposed that the existence of the mtDNA haplogroup HV0 among Jews from Turkey might represent the "genetic signal of an admixture of Iberian Jewry with local Iberian populations". The mtDNA of the Jews of Turkey does not, to any large extent, include lineages typical of West Asia. According to the 2008 study by Behar, 43% of Iraqi Jews are descended from five women. Genetic studies show that Persian and Bukharan Jews descend from a small number of female ancestors. The Mountain Jews showed a striking maternal founding event, with 58.6% of their total mtDNA genetic variation tracing back to one woman from the Levant carrying an mtDNA lineage within Hg J2b. According to the study of M. 
Thomas et al., 51% of Georgian Jews are descended from a single female. According to Behar, 58% are descended from this female ancestor. Researchers have not determined the origin of this ancestor, but it is known that this woman carried a haplotype that can be found throughout a large area stretching from the Mediterranean to Iraq and to the Caucasus. In a study by Richards et al., the authors suggest that a minor proportion of haplogroup L1 and L3a lineages from sub-Saharan Africa is present among Yemenite Jews. However, these lineages occur four times less frequently than among non-Jewish Yemenis. These sub-Saharan haplogroups are virtually absent among Jews from Iraq, Iran, and Georgia and do not appear among Ashkenazi Jews. The Jewish population of Yemen also reveals a founder effect: 42% of the direct maternal lines are traceable to five women, four coming from western Asia and one from East Africa. For Beta Israel, the results are similar to those for the paternal lines, namely genetic characteristics identical to those of surrounding populations. According to the 2008 study by Behar et al., the maternal lineages of the Bene Israel and Cochin Jews of India are of predominantly indigenous Indian origin. However, the mtDNA of the Bene Israel also includes lineages commonly found among Iranian and Iraqi Jews and also present among Italian Jews, and the mtDNA of Cochin Jews has some similarities to mtDNA lineages present in several non-Ashkenazi Jewish communities. Genetic research shows that 41.3% of Bene Israel descend from one female ancestor, who was of indigenous Indian origin. Another study also found that Cochin Jews have genetic similarities with other Jewish populations, in particular with Yemenite Jews, as well as with the indigenous populations of India. Comparison to non-Jewish populations Many genetic studies have demonstrated that most of the various Jewish ethnic divisions, as well as Palestinians, Bedouin, and other Levantines, cluster near one another. They found substantial genetic overlap between Israeli and Palestinian Arabs and Ashkenazi and Sephardic Jews. A small but statistically significant difference was found in the Y-chromosomal haplogroup distributions of Sephardic Jews and Palestinians, but no significant differences were found between Ashkenazi Jews and Palestinians or between the two Jewish communities. A distinct cluster was found in Palestinian haplotypes. Out of the 143 Arab Y-chromosomes studied, 32% of them belonged to this "I&P Arab clade", which contained only one non-Arab chromosome, that of a Sephardic Jew. This could be due to geographical isolation of the Jews or to admixture from other populations, but it could also be seen as insignificant given how small the tested group was, with over 68% of them showing no significant genetic differences at all. The Samaritans are a population from the northern part of ancient Israel, where they have been historically well attested since at least the 4th century BC. They define themselves as the descendants of the tribes of Ephraim and Manasseh (named after the two sons of Joseph) living in the Kingdom of Israel before its destruction in 722 BC, as distinct from the Jews, descendants of the Israelites from the southern Kingdom of Judah. A 2004 study by Shen et al. compared the Y-DNA and mtDNA of 12 Samaritan men with those of 158 non-Samaritan men, divided among 6 Jewish populations (Ashkenazi, Moroccan, Libyan, Ethiopian, Iraqi, and Yemeni) and 2 non-Jewish populations from Israel (Druze and Arab). 
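The comparisons reported in the following passage are expressed as pairwise Fst genetic distances. As a rough illustration of what such a distance measures, here is a minimal sketch of a simplified, Nei-style two-population Fst computed from haplogroup frequencies; the frequencies below are hypothetical, and the study itself used AMOVA-based estimates across 11 populations rather than this simple two-population formula.

```python
def heterozygosity(freqs):
    """Expected heterozygosity: 1 minus the sum of squared haplogroup frequencies."""
    return 1.0 - sum(p * p for p in freqs.values())

def fst_two_populations(pop1, pop2):
    """Simplified two-population Fst (Nei's G_ST form):
    (H_T - H_S) / H_T, with H_T taken from the pooled mean frequencies."""
    haplogroups = set(pop1) | set(pop2)
    pooled = {h: (pop1.get(h, 0.0) + pop2.get(h, 0.0)) / 2.0 for h in haplogroups}
    h_s = (heterozygosity(pop1) + heterozygosity(pop2)) / 2.0
    h_t = heterozygosity(pooled)
    return (h_t - h_s) / h_t

# Hypothetical haplogroup frequencies for two populations (not data from the study).
pop_a = {"J1": 0.40, "J2": 0.25, "E1b1b": 0.20, "R1b": 0.15}
pop_b = {"J1": 0.15, "J2": 0.20, "E1b1b": 0.15, "R1b": 0.50}

print(round(fst_two_populations(pop_a, pop_b), 3))  # ~0.064
```

Values near zero indicate populations with nearly identical haplogroup profiles, while larger values indicate greater differentiation; this is how figures such as 0.033 or 0.163 in the passage below can be read.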
The study concludes that significant similarities exist between the paternal lines of Jews and Samaritans, but that the maternal lines differ between the two populations. Pairwise genetic distances (Fst) between the 11 populations were computed from AMOVA for both the Y-chromosomal and the mitochondrial data. For the Y-chromosome, all Jewish groups (except for the Ethiopian Jews) are closely related to each other and do not differ significantly from the Samaritans (0.041) or Druze (0.033), but are different from Palestinian Arabs (0.163), Africans (0.219), and Europeans (0.111). This study indicated that the Samaritan and Jewish Y-chromosomes have a much greater affinity for each other than for their geographical neighbors, the Palestinian Arabs. This suggests that the two share a common ancestral Near Eastern population predating their divergence in the 4th century BCE, supporting the Samaritan narrative of descent from native Israelites who survived the Assyrian exile rather than from foreign populations introduced by the Assyrian Empire. The Lemba clans are scattered among the Bantu-speaking tribes in Zimbabwe and northern South Africa. Their oral tradition traces the origin of the Jewish Lembas to Sana'a in Yemen. Some practices seem reminiscent of Jewish practices (e.g. circumcision and food laws). Two studies have attempted to determine the paternal origin of these tribes. The first, by A. Spurdle and T. Jenkins, dates from 1996 and suggests that more than half of the Lemba tested carry paternal lineages of Semitic origin.[i] The second, by Mark G. Thomas et al., dates from 2000 and also suggests that part of the Lemba have a Semitic origin that may derive from a mixture of Arabs and Jews.[j] In addition, the authors show that one of the Lemba clans (the Buba clan) has a large proportion of the CMH mentioned earlier. Recent research published in the South African Medical Journal studied Y-chromosome variation in two groups of Lemba, one South African and the other Zimbabwean (the Remba). It concluded that "While it was not possible to trace unequivocally the origins of the non-African Y chromosomes in the Lemba and Remba, this study does not support the earlier claims of their Jewish genetic heritage." The researcher suggested "a stronger link with Middle Eastern populations, probably the result of trade activity in the Indian Ocean." According to a 2008 study by Adams and colleagues, the inhabitants of the Iberian Peninsula (Spain and Portugal) have an average of 20% Sephardi Jewish ancestry,[k] with significant geographical variations ranging from 0% on Menorca to 36.3% in southern Portugal. According to the authors, part of this admixture might also be of Neolithic, Phoenician, or Arab-Syrian origin. Modern-day Ibero-American populations have also shown varying degrees of Sephardic Jewish ancestry, inherited from New Christian (converso) Iberian settler ancestors of Sephardic Jewish origin. Ibero-Americans are largely the result of admixture between immigrants from Iberia, indigenous peoples of the Americas, and sub-Saharan African slaves, as well as other Europeans and other immigrants. An individual's specific mixture depends on their family genealogy; a significant proportion of immigrants from Iberia (Spain and Portugal) hid their Sephardic Jewish origin. Researchers analyzed "two well-established communities in Colorado (33 unrelated individuals) and Ecuador (20 unrelated individuals) with a measurable prevalence of the BRCA1 c.185delAG and the GHR c.E180 mutations, respectively [...] 
thought to have been brought to these communities by Sephardic Jewish progenitors. [...] When examining the presumed European component of these two communities, we demonstrate enrichment for Sephardic Jewish ancestry not only for these mutations but also for other segments as well. [...] These findings are consistent with historical accounts of Jewish migration from the realms that comprise modern Spain and Portugal during the Age of Discovery. More importantly, they provide a rationale for the occurrence of mutations typically associated with the Jewish diaspora in Latin American communities." Studies on historical populations A 2020 study on remains from Bronze Age southern Levantine (Canaanite) populations found evidence of large-scale migration from the Zagros or Caucasus into the southern Levant by the Bronze Age and increasing over time (resulting in a Canaanite population descended from both those migrants and earlier Neolithic Levantine peoples). The results were found to be consistent with several Jewish groups (Moroccan, Ashkenazi, and Persian/Iranian Jews) and non-Jewish Arabic-speaking Levantine populations (such as Lebanese, Druze, Palestinians, and Syrians) deriving about half or more of their ancestry from populations related to those from the Bronze Age Levant and Chalcolithic Zagros. The study modeled the aforementioned groups as having ancestry from both ancient populations. In a study published in December 2022, new genome data obtained from the medieval Jewish cemetery of Erfurt, Germany was used to further trace the origins of the Ashkenazi Jewish community. These findings suggest that medieval Erfurt had at least two related but genetically distinct Jewish groups: one was closely related to Middle Eastern populations and was especially similar to modern Ashkenazi Jews from France and Germany and modern Sephardic Jews from Turkey; the other group had a substantial contribution from Eastern European populations. Modern Ashkenazi Jews from Eastern Europe no longer exhibit this genetic variability, and instead, their genomes resemble a nearly even mixture of the two Erfurt groups (with about 60% from the first group and 40% from the second). A study of Norwich Jews published in Current Biology in October 2022 analyzed a mass grave dated to between 1161 and 1216, correlating their death to an 1190 Third Crusade-era pogrom, and found DNA evidence of strong genetic affinity to present-day Ashkenazi Jews, including the reconstruction of similar genetic diseases, red hair and blue eyes. Hypotheses A 2009 study was able to genetically identify individuals with full or partial Ashkenazi Jewish ancestry. In August 2012, Legacy: A Genetic History of the Jewish People, a book by Harry Ostrer, concluded that all major Jewish groups share a common Middle Eastern origin. Ostrer also refuted the Khazar hypothesis of Ashkenazi ancestry. Autosomal genetic analysis in 2012 revealed that North African Jews are genetically close to European Jews, which "shows that North African Jews date to biblical-era Israel, and are not largely the descendants of natives who converted to Judaism." Y DNA studies examine various paternal lineages of modern Jewish populations. Such studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male line ancestors appear to have been mainly Middle Eastern. 
For example, Ashkenazi Jews share more paternal lineages in common with other Jewish and Middle Eastern groups than with non-Jewish populations in the areas where Jews lived in Eastern Europe, Germany, and the Rhine Valley. This is consistent with Jewish traditions that place most Jewish paternal origins in the region of the Middle East. A study conducted in 2013 by Behar et al. found no evidence of a Khazar origin for Ashkenazi Jews and stated that this lack of evidence "corroborates earlier results that Ashkenazi Jews derive their ancestry primarily from populations of the Middle East and Europe, that they possess considerable shared ancestry with other Jewish populations, and that there is no indication of a significant genetic contribution either from within or from north of the Caucasus region." In 2016, Eran Elhaik, together with Ranajit Das, Paul Wexler and Mehdi Pirooznia, advanced the view that the first Ashkenazi populations to speak Yiddish came from areas near four villages in Eastern Anatolia along the Silk Road whose names derived from the word "Ashkenaz", arguing that Iranian, Greek, Turkish, and Slavic populations converted on that travel route before moving to Khazaria, where a small-scale conversion took place. The study was dismissed by Sergio DellaPergola as a "falsification"; he noted that it failed to include Jewish groups such as the Italkim and Sephardic Jews, to whom Ashkenazi Jews are closely related genetically. Shaul Stampfer called Elhaik's research "basically nonsense". Elhaik replied that the DNA of non-Ashkenazic Jews would not affect the hypothesized origin of Ashkenazi DNA. Dovid Katz criticized the study's linguistic analysis, stating: "The authors have melded accurate but contextually meaningless genetic correlations with laughable linguistic theories ... there is not a single word or sound in Yiddish that comes from Iranian or Turkish". In a joint study published in 2016 in Genome Biology and Evolution, Pavel Flegontov (Department of Biology and Ecology, Faculty of Science, University of Ostrava, Czech Republic, and the A. A. Kharkevich Institute of Linguistics, Russian Academy of Sciences, Moscow), Mark G. Thomas (Research Department of Genetics, Evolution and Environment, University College London, UK), Valentina Fedchenko (Saint Petersburg State University), and George Starostin (Russian State University for the Humanities) dismissed both the genetic and linguistic components of the Elhaik et al. study, arguing that "GPS is a provenancing tool suited to inferring the geographic region where a modern and recently unadmixed genome is most likely to arise, but is hardly suitable for admixed populations and for tracing ancestry up to 1000 years before present, as its authors have previously claimed. Moreover, all methods of historical linguistics concur that Yiddish is a Germanic language, with no reliable evidence for Slavic, Iranian, or Turkic substrata." The authors concluded: "In our view, Das and co-authors have attempted to fit together a marginal and unsupported interpretation of the linguistic data with a genetic provenancing approach, GPS, that is at best only suited to inferring the most likely geographic location of modern and relatively unadmixed genomes, and tells nothing of population history and origin." Das and co-authors, in a non-peer-reviewed response, defended the methodological adequacy of their approach. In 2016, Elhaik, having reviewed the literature in search of a 'Jüdische Typus', argued that there is no genomic hallmark for Jewishness. 
While he allows that in the future a 'Jewish' marker may turn up, so far, in his view, Jewishness turns out to be socially defined (a socionome), determined by non-genetic factors. On 31 October 2016, a corrigendum to the initial GPS paper by Elhaik et al. (2014) was published in Nature Communications. The GPS tool remained freely available on the lab website of Dr. Tatiana Tatarinova, but as of December 2016 the link was broken. In 2017, the same authors further supported a non-Levantine origin of Ashkenazi Jews, claiming that "Overall, the combined results (of linguistics study and GPS tool) are in a strong agreement with the predictions of the Irano-Turko-Slavic hypothesis and rule out an ancient Levantine origin for AJs, which is predominant among modern-day Levantine populations (e.g., Bedouins and Palestinians)." Elhaik's and Das's work was strongly criticized by, among others, Marion Aptroot of the University of Düsseldorf, who in a study published in Genome Biology and Evolution claimed that "Das et al. create a narrative based on genetic, philological and historical research and state that the findings of the three disciplines support each other...Incomplete and unreliable data from times when people were not counted regardless of sex, age, religion, or financial or social status on the one hand, and the dearth of linguistic evidence predating the 15th century on the other, leave much room for conjecture and speculation. Linguistic evidence, however, does not support the theory that Yiddish is a Slavic language, and textual sources belie the thesis that the name Ashkenaz was brought to Eastern Europe directly from a region in the Near East. Although the focus and methods of research may be different in the humanities and the sciences, scholars should try to account for all evidence and observations, regardless of the field of research. Seen from the standpoint of the humanities, certain aspects of the article by Das et al. fall short of established standards". In August 2022, Elhaik published a critique of the methodology of PCA, which lies at the core of studies by population geneticists seeking to identify ethnogenesis, citing work on the Ashkenazi Jews among several other examples. His re-analysis concludes that the outcomes are generated by cherry-picking the data to obtain a foregone conclusion of origins – a Middle Eastern link in the case of the Ashkenazi – and argues that the circular reasoning in the procedure lends itself to eliciting "erroneous, contradictory, and absurd results". As of 2025, the sole study on ancient Israelite DNA pertains to genetic material recovered from the remains of ancient Israelites who lived during the First Temple period. These remains were excavated from the Kiryat Ye'arim site. Professor Israel Finkelstein led the research, during which two individuals, one male and one female, were examined. The study revealed that the male individual belonged to the J2 Y-DNA haplogroup, a set of closely related DNA sequences thought to have originated in the Caucasus or Eastern Anatolia, while the two different mitochondrial haplogroups identified were T1a9 and H87. The former haplogroup had previously been documented in an Iron Age Polish site. The latter has been observed in modern Basques, Tunisian Arabs, and Iraqis. History As early as the 1950s, failed attempts were made to use markers such as finger-print patterns to characterize Jewish communities. In the 1960s, greater success was achieved in tracking the distribution of genetic diseases in Jewish communities. 
Alongside this, studies were conducted that focused on identifying trends in converging blood group frequencies. Also at this time, research based on blood groups and serum markers began, yielding both evidence of Middle Eastern origins among Jewish diaspora groups and a degree of commonality between Jewish populations relative to paired Jewish and non-Jewish populations. While efforts to find converging blood group frequencies that might point to "hypothetical ancient Jews" were not successful, according to Falk, this "did not discourage the authors" from making claims of common ancestry. From the mid-1970s onwards, RNA and DNA sequencing enabled the comparison of genetic relationships, and during the 1980s, it also became possible to examine genetic polymorphism across multiple sites in DNA sequences. During this period, researchers worked to categorize the relatedness between different Jewish groups. Due to a paucity of polymorphic markers, the early studies "focused on genetic distances" and on building hierarchical models between population samples. Advances in DNA sequence analysis using algorithms based on "probable common forefathers on the assumption of branching phylogenies" pointed to common progenitors among diverse Jewish communities, as well as overlap with Mediterranean populations. Both the early studies on blood markers and later studies of the monoallelic Y chromosomal and mitochondrial DNA (mtDNA) haplotypes revealed evidence of both Middle Eastern and local origin, with indeterminate levels of local genetic admixture. The conclusions of the diverse studies conducted turned out to be "remarkably similar", providing both evidence of shared genetic ancestry among major diaspora groups and varied levels of local genetic admixture. In the 1990s, this developed into attempts to identify markers in highly discrete population groups. The results were mixed. One study on the Cohanim hereditary priesthood found distinctive signs of genetic homogeneity within the group. At the same time, no unusual clustering of Y-haplotypes was found relative to non-Cohanim Jews. However, such studies did show that certain population groups could be identified. As David Goldstein noted: "Our studies of the Cohanim established that present-day Ashkenazi and Sephardi Cohanim are more genetically similar to one another than they are to either Israelites or non-Jews." In the late 1990s, Uzi Ritte cross-analyzed Y-chromosome and mtDNA sequences in six Jewish communities and found indications of "admixture with neighboring communities of non-Jews". A 2013 study of Ashkenazi mtDNA, meanwhile, revealed four matrilineal founders, all of which had ancestry in prehistoric Europe rather than the Near East or Caucasus. Falk notes that, "not surprisingly, Ashkenazi Jews prove to compose a distinct yet quite integral branch of European genomic tapestry." Several genetic studies demonstrated that approximately half of the ancestry of Ashkenazi Jews may be traced to the ancient Middle East and the other half to Europe, showing proximity to both ancient and present-day Middle Eastern and European groups. The European half derives mainly from southern European populations. Several studies estimate that between 50% and 80% of Ashkenazic Y-chromosomal (paternal) lineages originate in the Near East, with some estimating that at least 80% of their maternal lineages originated in Europe and some giving a lower estimate. 
Most researchers now believe that the early Jewish communities of southern Europe, which are the forebears of Ashkenazi Jews, are descended from both the ancient Israelites and from European converts to Judaism. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meat] | [TOKENS: 7039] |
Contents Meat Meat is animal tissue, mostly muscle, that is eaten as food. Humans have hunted and farmed other animals for meat since prehistory. The Neolithic Revolution allowed the domestication of vertebrates, including chickens, sheep, goats, pigs, horses, and cattle, starting around 11,000 years ago. Since then, selective breeding has enabled farmers to produce meat with the qualities desired by producers and consumers. Meat is important to economies and cultures around the world. Meat is mainly composed of water, protein, and fat. Its quality is affected by many factors, including the genetics, health, and nutritional status of the animal involved. Without preservation, bacteria and fungi decompose and spoil unprocessed meat within hours or days. Meat is edible raw, but it is mostly eaten cooked, such as by stewing or roasting, or processed, such as by smoking or salting. The consumption of meat (especially red and processed meat, as opposed to fish and poultry) increases the risk of certain negative health outcomes including cancer, coronary heart disease, and diabetes. Meat production significantly harms the environment by contributing to global warming, pollution, and biodiversity loss. Some people (vegetarians and vegans) choose not to eat meat for ethical, environmental, health or religious reasons. Etymology The word meat comes from the Old English word mete, meaning food in general. In modern usage, meat primarily means skeletal muscle with its associated fat and connective tissue, but it can include offal, here meaning other edible organs such as liver and kidney. The term is sometimes used in a more restrictive sense to mean the flesh of mammalian species (pigs, cattle, sheep, goats, etc.) raised and prepared for human consumption, to the exclusion of fish, other seafood, insects, poultry, or other animals. History Paleontological evidence suggests that meat constituted a substantial proportion of the diet of the earliest humans. Early hunter-gatherers depended on the organized hunting of large animals such as bison and deer. Animals were domesticated in the Neolithic, enabling the systematic production of meat and the breeding of animals to improve meat production. In the postwar period, governments gave farmers guaranteed prices to increase animal production. The effect was to raise output at the cost of increased inputs such as of animal feed and veterinary medicines, as well as of animal disease and environmental pollution. In 1966, the United States, the United Kingdom and other industrialized nations, began factory farming of beef and dairy cattle and domestic pigs. Intensive animal farming became globalized in the later years of the 20th century, replacing traditional stock rearing in countries around the world. In 1990 intensive animal farming accounted for 30% of world meat production and by 2005, this had risen to 40%. Modern agriculture employs techniques such as progeny testing to speed selective breeding, allowing the rapid acquisition of the qualities desired by meat producers. For instance, in the wake of well-publicized health concerns associated with saturated fats in the 1980s, the fat content of United Kingdom beef, pork and lamb fell from 20–26 percent to 4–8 percent within a few decades, due to both selective breeding for leanness and changed methods of butchery. Methods of genetic engineering that could improve the meat-producing qualities of animals are becoming available. Meat production continues to be shaped by the demands of customers. 
The trend towards selling meat in pre-packaged cuts has increased the demand for larger breeds of cattle, better suited to producing such cuts. Animals not previously exploited for their meat are now being farmed, including mammals such as antelope, zebra, water buffalo and camel, as well as non-mammals, such as crocodile, emu and ostrich. Organic farming serves an increasing demand for meat produced to that standard. Animal growth and development Several factors affect the growth and development of meat. Some economically important traits in meat animals are heritable to some degree, and can thus be selected for by animal breeding. In cattle, certain growth features are controlled by recessive genes which have not so far been excluded, complicating breeding. One such trait is dwarfism; another is the doppelender or "double muscling" condition, which causes muscle hypertrophy and thereby increases the animal's commercial value. Genetic analysis continues to reveal the mechanisms that control numerous aspects of the endocrine system and, through it, meat growth and quality. Genetic engineering can shorten breeding programs significantly because it allows for the identification and isolation of genes coding for desired traits, and for the reincorporation of these genes into the animal genome. To enable this, the genomes of many animals are being mapped. Some research has already seen commercial application. For instance, a recombinant bacterium has been developed which improves the digestion of grass in the rumen of cattle, and some features of muscle fibers have been genetically altered. Experimental reproductive cloning of commercially important meat animals such as sheep, pigs and cattle has been successful. Asexual reproduction of animals bearing desirable traits is anticipated. Heat regulation in livestock is of economic significance, as mammals attempt to maintain a constant optimal body temperature. Low temperatures tend to prolong animal development and high temperatures tend to delay it. Depending on their size, body shape and insulation through tissue and fur, some animals have a relatively narrow zone of temperature tolerance and others (e.g. cattle) a broad one. Static magnetic fields, for reasons still unknown, retard animal development. The quality and quantity of usable meat depend on the animal's plane of nutrition, i.e., whether it is over- or underfed. Scientists disagree about how exactly the plane of nutrition influences carcase composition. The composition of the diet, especially the amount of protein provided, is an important factor regulating animal growth. Ruminants, which can digest cellulose, are better adapted to poor-quality diets, but their ruminal microorganisms degrade high-quality protein if supplied in excess. Because producing high-quality protein animal feed is expensive, several techniques are employed or experimented with to ensure maximum utilization of protein. These include the treatment of feed with formalin to protect amino acids during their passage through the rumen, the recycling of manure by feeding it back to cattle mixed with feed concentrates, and the conversion of petroleum hydrocarbons to protein through microbial action. In plant feed, environmental factors influence the availability of crucial nutrients or micronutrients, a lack or excess of which can cause a great many ailments. In Australia, where the soil contains limited phosphate, cattle are fed additional phosphate to increase the efficiency of beef production. 
Also in Australia, cattle and sheep in certain areas were often found losing their appetite and dying in the midst of rich pasture; this was found to be a result of cobalt deficiency in the soil. Plant toxins are a risk to grazing animals; for instance, sodium fluoroacetate, found in some African and Australian plants, kills by disrupting the cellular metabolism. Some man-made pollutants such as methylmercury and some pesticide residues present a particular hazard as they bioaccumulate in meat, potentially poisoning consumers. Practices such as confinement in factory farming have generated concerns for animal welfare. Animals have abnormal behaviors such as tail-biting, cannibalism, and feather pecking. Invasive procedures such as beak trimming, castration, and ear notching have similarly been questioned. Breeding for high productivity may affect welfare, as when broiler chickens are bred to be very large and to grow rapidly. Broilers often have leg deformities and become lame, and many die from the stress of handling and transport. Meat producers may seek to improve the fertility of female animals through the administration of gonadotrophic or ovulation-inducing hormones. In pig production, sow infertility is a common problem – possibly due to excessive fatness. No methods currently exist to augment the fertility of male animals. Artificial insemination is now routinely used to produce animals of the best possible genetic quality, and the efficiency of this method is improved through the administration of hormones that synchronize the ovulation cycles within groups of females. Growth hormones, particularly anabolic agents such as steroids, are used in some countries to accelerate muscle growth in animals. This practice has given rise to the beef hormone controversy, an international trade dispute. It may decrease the tenderness of meat, although research on this is inconclusive, and have other effects on the composition of the muscle flesh. Where castration is used to improve control over male animals, its side effects can be counteracted by the administration of hormones. Myostatin has been used to produce muscle hypertrophy. Sedatives may be administered to animals to counteract stress factors and increase weight gain. The feeding of antibiotics to certain animals increases growth rates. This practice is particularly prevalent in the US, but has been banned in the EU, partly because it causes antimicrobial resistance in pathogenic microorganisms. Composition The biochemical composition of meat varies in complex ways depending on the species, breed, sex, age, plane of nutrition, training and exercise of the animal, as well as on the anatomical location of the musculature involved. Even between animals of the same litter and sex there are considerable differences in such parameters as the percentage of intramuscular fat. Adult mammalian muscle consists of roughly 75 percent water, 19 percent protein, 2.5 percent intramuscular fat, 1.2 percent carbohydrates and 2.3 percent other soluble substances. These include organic compounds, especially amino acids, and inorganic substances such as minerals. Muscle proteins are either soluble in water (sarcoplasmic proteins, about 11.5 percent of total muscle mass) or in concentrated salt solutions (myofibrillar proteins, about 5.5 percent of mass). There are several hundred sarcoplasmic proteins. 
Most of them – the glycolytic enzymes – are involved in glycolysis, the conversion of sugars into high-energy molecules, especially adenosine triphosphate (ATP). The two most abundant myofibrillar proteins, myosin and actin, form the muscle's overall structure and enable it to deliver power, consuming ATP in the process. The remaining protein mass includes connective tissue (collagen and elastin). Fat in meat can be either adipose tissue, used by the animal to store energy and consisting of "true fats" (esters of glycerol with fatty acids), or intramuscular fat, which contains phospholipids and cholesterol. Muscle tissue is high in protein, containing all of the essential amino acids, and in most cases is a good source of zinc, vitamin B12, selenium, phosphorus, niacin, vitamin B6, choline, riboflavin and iron. Several forms of meat are high in vitamin K. Muscle tissue is very low in carbohydrates and does not contain dietary fiber. The fat content of meat varies widely with the species and breed of animal, the way in which the animal was raised, what it was fed, the part of the body, and the methods of butchering and cooking. Wild animals such as deer are leaner than farm animals, leading those concerned about fat content to choose game such as venison. Decades of breeding meat animals for fatness is being reversed by consumer demand for leaner meat. Small amounts – in the range 3%–7% – of fat deposited near the muscle fibers ("marbling") in meats can slightly improve perceived flavour, juiciness and tenderness, but contribute no more than about 5% to overall palatability. Fat around meat further contains cholesterol. The increase in meat consumption after 1960 is associated with significant imbalances of fat and cholesterol in the human diet. Production Upon reaching a predetermined age or weight, livestock are usually transported en masse to the slaughterhouse. Depending on its length and circumstances, this may exert stress and injuries on the animals, and some may die en route. Unnecessary stress in transport may adversely affect the quality of the meat. In particular, the muscles of stressed animals are low in water and glycogen, and their pH fails to attain acidic values, all of which results in poor meat quality. Animals are usually slaughtered by being first stunned and then exsanguinated (bled out). Death results from the one or the other procedure, depending on the methods employed. Stunning can be effected through asphyxiating the animals with carbon dioxide, shooting them with a gun or a captive bolt pistol, or shocking them with electric current. The exsanguination is accomplished by severing the carotid artery and the jugular vein in cattle and sheep, and the anterior vena cava in pigs. Draining as much blood as possible from the carcass is necessary because blood causes the meat to have an unappealing appearance and is a breeding ground for microorganisms. After exsanguination, the carcass is dressed; that is, the head, feet, hide (except hogs and some veal), excess fat, viscera and offal are removed, leaving only bones and edible muscle. Cattle and pig carcases, but not those of sheep, are then split in half along the mid ventral axis, and the carcase is cut into wholesale pieces. The dressing and cutting sequence, long a province of manual labor, is being progressively automated. 
Under hygienic conditions and without other treatment, meat can be stored at above its freezing point (−1.5 °C) for about six weeks without spoilage, during which time it undergoes an aging process that increases its tenderness and flavor. During the first day after death, glycolysis continues until the accumulation of lactic acid causes the pH to reach about 5.5. The remaining glycogen, about 18 g per kg, increases the water-holding capacity and tenderness of cooked meat. Rigor mortis sets in a few hours after death as adenosine triphosphate is used up. This causes the muscle proteins actin and myosin to combine into rigid actomyosin. This in turn lowers the meat's water-holding capacity, so the meat loses water or "weeps". In muscles that enter rigor in a contracted position, actin and myosin filaments overlap and cross-bond, resulting in meat that becomes tough when cooked. Over time, muscle proteins denature in varying degree, with the exception of the collagen and elastin of connective tissue, and rigor mortis resolves. These changes mean that meat is tender and pliable when cooked just after death or after the resolution of rigor, but tough when cooked during rigor. As the muscle pigment myoglobin denatures, its iron oxidizes, which may cause a brown discoloration near the surface of the meat. Ongoing proteolysis contributes to conditioning: hypoxanthine, a breakdown product of ATP, contributes to meat's flavor and odor, as do other products of the decomposition of muscle fat and protein. When meat is industrially processed, additives are used to protect or modify its flavor or color, to improve its tenderness, juiciness or cohesiveness, or to aid with its preservation. Consumption A bioarchaeological (specifically, isotopic analysis) study of early medieval England found, based on the funerary record, that high-meat protein diets were extremely rare, and that (contrary to previously held assumptions) elites did not consume more meat than non-elites, and men did not consume more meat than women. In the nineteenth century, meat consumption in Britain was the highest in Europe, exceeded only by that in British colonies. In the 1830s consumption per head in Britain was about 34 kilograms (75 lb) a year, rising to 59 kilograms (130 lb) in 1912. In 1904, laborers consumed 39 kilograms (87 lb) a year while aristocrats ate 140 kilograms (300 lb). There were some 43,000 butcher's shops in Britain in 1910, with "possibly more money invested in the meat industry than in any other British business" except finance. The US was a meat importing country by 1926. Truncated lifespan as a result of intensive breeding allows more meat to be produced from fewer animals. The world cattle population was about 600 million in 1929, with 700 million sheep and goats and 300 million pigs. According to the Food and Agriculture Organization, the overall consumption for white meat has increased from the 20th to the 21st centuries. Poultry meat has increased by 76.6% per kilo per capita and pig meat by 19.7%. Bovine meat has decreased from 10.4 kg (22 lb 15 oz) per capita in 1990 to 9.6 kg (21 lb 3 oz) per capita in 2009. FAO analysis found that 357 million tonnes of meat were produced in 2021, 53% more than in 2000, with chicken meat representing more than half the increase. Overall, diets that include meat are the most common worldwide according to the results of a 2018 Ipsos MORI study of 16–64 years olds in 28 countries. 
Ipsos states "An omnivorous diet is the most common diet globally, with non-meat diets (which can include fish) followed by over a tenth of the global population." Approximately 87% of people include meat in their diet in some frequency. 73% of meat eaters included it in their diet regularly and 14% consumed meat only occasionally or infrequently. The type of meat consumed varies between different cultures. The amount and kind of meat consumed varies by income, both between countries and within a given country. Horses are commonly eaten in countries such as France, Italy, Germany and Japan. Horses and other large mammals such as reindeer were hunted during the late Paleolithic in western Europe. Dogs are consumed in China, South Korea and Vietnam. Dogs are occasionally eaten in the Arctic regions. Historically, dog meat has been consumed in various parts of the world, such as Hawaii, Japan, Switzerland and Mexico. Cats are sometimes eaten, such as in Peru. Guinea pigs are raised for their flesh in the Andes. Whales and dolphins are hunted, partly for their flesh, in several countries. Misidentification sometimes occurs; in 2013, products in Europe labelled as beef actually contained horse meat. Meat can be cooked in many ways, including braising, broiling, frying, grilling, and roasting. Meat can be cured by smoking, which preserves and flavors food by exposing it to smoke from burning or smoldering wood. Other methods of curing include pickling, salting, and air-drying. Some recipes call for raw meat; steak tartare is made from minced raw beef. Pâtés are made with ground meat and fat, often including liver. Red vs white meat In the context of nutrition, red meat is defined as meat obtained from mammals, including beef, pork, lamb, mutton, veal, venison, and goat. Red meat does not necessarily appear red in color. Studies on the long-term health effects of meat often use the term white meat for poultry, including chicken and turkey. Some sources use the term white meat for both poultry and fish, and others exclude fish. In culinary contexts, the term white meat is often used more narrowly to refer to only certain cuts of poultry, particularly the breast and wings. Chicken legs and thighs are referred to as dark meat in these contexts. Health effects Meat, in particular red and processed meat, is linked to a variety of health risks. The 2015–2020 Dietary Guidelines for Americans asked men and teenage boys to increase their consumption of vegetables or other underconsumed foods (fruits, whole grains, and dairy) while reducing intake of protein foods (meats, poultry, and eggs) that they currently overconsume. Toxic compounds including heavy metals, mycotoxins, pesticide residues, dioxins, polychlorinated biphenyl can contaminate meat. Processed, smoked and cooked meat may contain carcinogens such as polycyclic aromatic hydrocarbons. Toxins may be introduced to meat as part of animal feed, as veterinary drug residues, or during processing and cooking. Such compounds are often metabolized in the body to form harmful by-products. Negative effects depend on the individual genome, diet, and history of the consumer. The consumption of processed and red meat carries an increased risk of cancer. 
The International Agency for Research on Cancer (IARC), a specialized agency of the World Health Organization (WHO), classified processed meat (e.g., bacon, ham, hot dogs, sausages) as "carcinogenic to humans (Group 1), based on sufficient evidence in humans that the consumption of processed meat causes colorectal cancer." IARC classified red meat as "probably carcinogenic to humans (Group 2A), based on limited evidence that the consumption of red meat causes cancer in humans and strong mechanistic evidence supporting a carcinogenic effect." Cancer Research UK, the National Health Service (NHS) and the National Cancer Institute have stated that red and processed meat intake increases the risk of bowel cancer. The American Cancer Society, in its "Diet and Physical Activity Guideline", stated "evidence that red and processed meats increase cancer risk has existed for decades, and many health organizations recommend limiting or avoiding these foods." The Canadian Cancer Society has stated that "eating red and processed meat increases cancer risk". A 2021 review found an increase of 11–51% in the risk of multiple cancers per 100 g/d increment of red meat, and an increase of 8–72% in the risk of multiple cancers per 50 g/d increment of processed meat. Highly carcinogenic nitrosamines are commonly found in processed meat products. Nitrosamines are also formed in the gut when heme iron is consumed; red meat is rich in heme iron. Heterocyclic amines (HCAs) and polycyclic aromatic hydrocarbons (PAHs) are chemicals formed when muscle meat, including beef, pork, fish, or poultry, is cooked using high-temperature methods, such as pan frying or grilling directly over an open flame. In laboratory experiments, HCAs and PAHs have been found to be mutagenic—that is, they cause changes in DNA that may increase the risk of cancer. Microwaving meat before finishing cooking may reduce HCAs significantly. Bacterial contamination has been observed in meat products. A 2011 study by the Translational Genomics Research Institute showed that nearly half (47%) of the meat and poultry in U.S. grocery stores were contaminated with S. aureus, with more than half (52%) of those bacteria resistant to antibiotics. A 2018 investigation by the Bureau of Investigative Journalism and The Guardian found that around 15 percent of the US population suffers from foodborne illnesses every year. The investigation highlighted unsanitary conditions in US-based meat plants, which included meat products covered in excrement and abscesses "filled with pus". Complete cooking and the careful avoidance of recontamination reduce the risk of bacterial infections from meat. A 2022 umbrella review found that each 100 g of red meat consumed per day is associated with a 17% higher risk of type 2 diabetes. Each 50 g of processed meat consumed per day is associated with a 37% higher risk of type 2 diabetes. Diabetes UK advises people to limit their intake of red and processed meat. Meat production and trade substantially increase risks for infectious diseases (zoonoses), including pandemics, whether through contact with wild and farmed animals, or via husbandry's environmental impact. For example, avian influenza from poultry meat production is a threat to human health. Furthermore, the use of antibiotics in meat production contributes to antimicrobial resistance – which contributes to millions of deaths – and makes it harder to control infectious diseases. 
In response to changing meat prices as well as health concerns about saturated fat and cholesterol, consumers have altered their consumption of various meats. Consumption of beef in the United States dropped by 21% between 1970–1974 and 1990–1994, while consumption of chicken increased by 90%. A 2022 umbrella review found that each 100 g of red meat consumed per day is associated with a 15% higher risk of coronary heart disease, a 14% higher risk of hypertension, and a 12% higher risk of stroke. Each 50 g of processed meat consumed per day is associated with a 27% higher risk of coronary heart disease, a 17% higher risk of stroke, an 8% higher risk of heart failure, and a 15% higher risk of all-cause mortality. Environmental impact A multitude of serious negative environmental effects are associated with meat production. Among these are greenhouse gas emissions, fossil energy use, water use, water quality changes, and effects on grazed ecosystems. They are so significant that according to University of Oxford researchers, "a vegan diet is probably the single biggest way to reduce your impact on planet Earth... far bigger than cutting down on your flights or buying an electric car". However, this is often ignored in the public consciousness and in plans to tackle serious environmental issues such as the climate crisis. The livestock sector may be the largest source of water pollution (due to animal wastes, fertilizers, pesticides), and it contributes to the emergence of antibiotic resistance. It accounts for over 8% of global human water use. It is a significant driver of biodiversity loss and ecosystem degradation, as it causes deforestation, ocean dead zones, species extinction, land degradation, pollution, overfishing and global warming. Cattle farming was estimated to be responsible for 80 per cent of Amazon deforestation in 2008 due to the clearing of forests to grow animal feed (especially soya) and cattle ranching. Environmental effects vary among livestock production systems. Grazing of livestock can be beneficial for some wildlife species, but not for others. Targeted grazing of livestock is used as a food-producing alternative to herbicide use in some vegetation management. Meat production is by far the biggest user of land, as it accounts for nearly 40% of the global land surface. In the contiguous United States alone, 34% of the land area (265 million hectares or 654 million acres) is used as pasture and rangeland, mostly feeding livestock, not counting 158 million hectares (391 million acres) of cropland (20%), some of which is used for producing feed for livestock. Roughly 75% of deforested land around the globe is used for livestock pasture. Deforestation from practices like slash-and-burn releases CO2 and removes the carbon sink of mature tropical forest ecosystems, which substantially mitigate climate change. Land use is a major pressure on fertile soils, which are important for global food security. The rising global consumption of carbon-intensive meat products has "exploded the global carbon footprint of agriculture," according to some top scientists. Meat, dairy, and egg production are responsible for 57% of the greenhouse gases attributable to food production, and 20% of all greenhouse gas emissions. Some nations show very different impacts from counterparts within the same income group, with Brazil and Australia having emissions over 200% higher than the average of their respective income groups, driven by meat consumption. 
According to the Assessing the Environmental Impacts of Consumption and Production report produced by United Nations Environment Programme's (UNEP) international panel for sustainable resource management, a worldwide transition in the direction of a meat and dairy free diet is indispensable if adverse global climate change were to be prevented. A 2019 report in The Lancet recommended that global meat (and sugar) consumption be reduced by 50 percent to mitigate climate change. Meat consumption in Western societies needs to be reduced by up to 90% according to a 2018 study published in Nature. The 2019 special report by the Intergovernmental Panel on Climate Change called for significantly reducing meat consumption, particularly in wealthy countries, in order to mitigate and adapt to climate change. Meat consumption is a primary contributor to the sixth mass extinction. A 2017 study by the World Wildlife Fund found that 60% of global biodiversity loss is attributable to meat-based diets, in particular from the use of land for feed crops, resulting in large-scale loss of habitats and species. Livestock make up 60% of the biomass of all mammals on earth, followed by humans (36%) and wild mammals (4%). In November 2017, 15,364 world scientists signed a Warning to Humanity calling for a drastic reduction in per capita consumption of meat and "dietary shifts towards mostly plant-based foods". The 2019 Global Assessment Report on Biodiversity and Ecosystem Services recommended a reduction in meat consumption to mitigate biodiversity loss. A 2021 Chatham House report asserted that a shift towards plant-based diets would free up land for the restoration of ecosystems and biodiversity. Meat consumption is predicted to rise as the human population increases and becomes more affluent; this in turn would increase greenhouse gas emissions and further reduce biodiversity. The environmental impact of meat production can be reduced on the farm by conversion of human-inedible residues of food crops. Manure from meat-producing livestock is used as fertilizer. Substitution of animal manures for synthetic fertilizers in crop production can be environmentally significant, as between 43 and 88 MJ of fossil fuel energy are used per kg of nitrogen in manufacture of synthetic nitrogenous fertilizers. The IPCC and others have stated that meat production has to be reduced substantially for any sufficient mitigation of climate change and, at least initially, largely through shifts towards plant-based diets where meat consumption is high. Meat can be replaced by, for example, high-protein iron-rich low-emission legumes and common fungi, dietary supplements (e.g. of vitamin B12) and fortified foods, cultured meat, microbial foods, mycoprotein, meat substitutes, and other alternatives, such as those based on mushrooms, legumes (pulses), and other food sources. Land previously used for meat production can be rewilded. The biologists Rodolfo Dirzo, Gerardo Ceballos, and Paul R. Ehrlich state that it is the "massive planetary monopoly of industrial meat production that needs to be curbed" while respecting the cultural traditions of indigenous peoples, for whom meat is an important source of protein. Cultural aspects Meat is part of the human diet in most cultures, where it often has symbolic meaning and important social functions. Some people choose not to eat meat (vegetarianism) or any food made from animals (veganism). 
The reasons for avoiding some or all meat may include ethical objections to killing animals for food, health concerns, environmental concerns or religious dietary laws. Ethical issues regarding the consumption of meat include objecting to the act of killing animals or to the agricultural practices used in meat production. Reasons for objecting to killing animals for consumption may include animal rights, environmental ethics, or an aversion to inflicting pain or harm on sentient animals. Some people, while not vegetarians, refuse to eat the flesh of certain animals for cultural or religious reasons. The founders of Western philosophy disagreed about the ethics of eating meat. Plato's Republic has Socrates describe the ideal state as vegetarian. Pythagoras believed that humans and animals were equal and therefore disapproved of meat consumption, as did Plutarch, whereas Zeno and Epicurus were vegetarian but allowed meat-eating in their philosophy. Conversely, Aristotle's Politics asserts that animals, as inferior beings, exist to serve humans, including as food. Augustine drew on Aristotle to argue that the universe's natural hierarchy allows humans to eat animals, and animals to eat plants. Enlightenment philosophers were likewise divided. Descartes wrote that animals were merely animated machines, while Kant considered them inferior beings for lack of discernment: means rather than ends. But Voltaire and Rousseau disagreed; Rousseau argued that meat-eating is a social rather than a natural act, because children are not interested in meat. Later philosophers examined the changing practices of eating meat in the modern age as part of a process of detachment from animals as living beings. Norbert Elias, for instance, noted that in medieval times cooked animals were brought to the table whole, but that since the Renaissance only the edible parts are served, which are no longer recognizably part of an animal. Modern eaters, according to Noëlie Vialles, demand an "ellipsis" between meat and dead animals. Fernand Braudel wrote that since the European diet of the 15th and 16th centuries was particularly heavy in meat, European colonialism helped export meat-eating across the globe, as colonized peoples took up the culinary habits of their colonizers. Among the Indian religions, Jainism opposes the eating of meat, while some schools of Buddhism and Hinduism advocate but do not mandate vegetarianism. Some Sikh groups oppose eating any meat. Jewish Kashrut dietary rules allow certain (kosher) meat and forbid other (treif) meat. Similar rules apply in Islamic dietary laws: The Quran explicitly forbids meat from animals that die naturally, blood, and the meat of pigs, which are haram, forbidden, as opposed to halal, allowed. Research in applied psychology has investigated meat eating in relation to morality, emotions, cognition, and personality. Psychological research suggests meat eating is correlated with masculinity and reduced openness to experience. Research into the consumer psychology of meat is relevant both to meat industry marketing and to those advocating eating less meat. Unlike most other foods, meat is not perceived as gender-neutral; it is associated with men and masculinity. Sociological research, ranging from African tribal societies to contemporary barbecue, indicates that men are much more likely to participate in preparing meat than other food. 
This has been attributed to the influence of traditional male gender roles, in view of what Jack Goody calls a "male familiarity with killing", or as Claude Lévi-Strauss suggests, that roasting (meat) is more violent than boiling (grains and vegetables). In modern societies, men tend to consume more meat than women, and men often prefer red meat whereas women tend to prefer chicken and fish. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-191] | [TOKENS: 12858] |
Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Originally created by Markus "Notch" Persson using the Java programming language, the game was handed over to Jens "Jeb" Bergensten, who took control of its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios holds the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position even in mid-air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa. 
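The block world described above is, at heart, a three-dimensional grid of voxels that the player reads and writes: mining a block clears a cell, placing one fills it. The Java sketch below is purely illustrative of that idea and is not Mojang's actual code; the class name, block IDs and the 256-block height used here are assumptions made for the example (real Minecraft chunks are 16×16 block columns, but the build height has varied across versions).

```java
// Illustrative only: a tiny voxel "chunk" storing block IDs in a dense 3D array.
// Names (Chunk, block IDs) are invented for this sketch, not Mojang's internal code.
public class Chunk {
    public static final int SIZE = 16;     // blocks per horizontal axis of a chunk
    public static final int HEIGHT = 256;  // build height chosen for illustration

    private final byte[][][] blocks = new byte[SIZE][HEIGHT][SIZE];

    public byte get(int x, int y, int z) {
        checkBounds(x, y, z);
        return blocks[x][y][z];
    }

    // "Placing" or "mining" a block is just writing a new ID into the grid.
    public void set(int x, int y, int z, byte blockId) {
        checkBounds(x, y, z);
        blocks[x][y][z] = blockId;
    }

    private void checkBounds(int x, int y, int z) {
        if (x < 0 || x >= SIZE || y < 0 || y >= HEIGHT || z < 0 || z >= SIZE) {
            throw new IndexOutOfBoundsException("outside chunk: " + x + "," + y + "," + z);
        }
    }

    public static void main(String[] args) {
        Chunk chunk = new Chunk();
        byte dirt = 1;                  // arbitrary block ID for this example
        chunk.set(3, 64, 7, dirt);      // "place" a block
        chunk.set(3, 64, 7, (byte) 0);  // "mine" it again (0 stands for air here)
        System.out.println(chunk.get(3, 64, 7)); // prints 0
    }
}
```

Storing block IDs in a dense per-chunk array makes reads and writes constant-time, which is one reason a chunked voxel grid suits a game built around constantly breaking and placing blocks.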
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
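As the paragraph above notes, terrain is generated from a map seed, which is what allows a world to be produced lazily as players explore: any column of terrain, however distant, can be recomputed from the seed and its coordinates. The sketch below illustrates that principle with a deliberately crude hash-based height function in Java; the mixing constants and height range are invented for the example, and the real generator is far more elaborate (layered noise, biomes, caves, and structures).

```java
import java.util.Random;

// Illustrative only: derive a terrain height for any column (x, z) from a world
// seed, so the same seed always reproduces the same "terrain". This is not
// Mojang's actual algorithm, just a demonstration of seed-determined generation.
public class SeededHeightmap {
    private final long worldSeed;

    public SeededHeightmap(long worldSeed) {
        this.worldSeed = worldSeed;
    }

    // Mix the seed with the column coordinates, then draw a height from the
    // resulting per-column random stream. Deterministic for a given seed.
    public int heightAt(int x, int z) {
        long columnSeed = worldSeed ^ (x * 341873128712L + z * 132897987541L);
        Random random = new Random(columnSeed);
        return 60 + random.nextInt(8); // heights between 60 and 67 in this toy example
    }

    public static void main(String[] args) {
        SeededHeightmap a = new SeededHeightmap(12345L);
        SeededHeightmap b = new SeededHeightmap(12345L);
        // Same seed, same coordinates -> same terrain, however far from the origin.
        System.out.println(a.heightAt(1_000_000, -5) == b.heightAt(1_000_000, -5)); // true
    }
}
```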
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to ensure players experience their maps as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
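The Survival rules described above (a depleting hunger bar, health regeneration on a nearly full bar, starvation damage on an empty one, and death at zero health) amount to a small state machine. The Java toy below encodes a loose version of those rules purely for illustration; the tick granularity, thresholds and rates are invented, and the real game adds details such as saturation, difficulty scaling and peaceful-mode exceptions.

```java
// Illustrative only: a toy model of Survival-mode health and hunger rules.
// Rates and thresholds are invented; the real game's mechanics are more complex.
public class SurvivalPlayer {
    private int health = 20;  // the game displays health as 10 hearts = 20 points
    private int hunger = 20;  // the hunger bar likewise has 20 points

    // One coarse "tick" of the simplified rules.
    public void tick() {
        if (health <= 0) {
            return; // already dead; the player would respawn at their spawn point
        }
        if (hunger > 0) {
            hunger--;                   // hunger slowly depletes over time
        }
        if (hunger >= 18 && health < 20) {
            health++;                   // a nearly full hunger bar regenerates health
        } else if (hunger == 0) {
            health--;                   // an empty hunger bar causes starvation damage
        }
    }

    public void eat(int foodPoints) {
        hunger = Math.min(20, hunger + foodPoints); // eating refills the hunger bar
    }

    public boolean isDead() {
        return health <= 0; // on death, the inventory drops and the player respawns
    }

    public static void main(String[] args) {
        SurvivalPlayer player = new SurvivalPlayer();
        for (int i = 0; i < 30; i++) {
            player.tick();
        }
        player.eat(6); // e.g. eating food partially refills the hunger bar
        System.out.println(player.isDead()); // false in this toy run
    }
}
```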
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, hosting one themselves or connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, and support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was arbitrated on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received usually annual major updates—free to players who have purchased the game— each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009 and ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. Its beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Speaking about learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used software packages from Ableton Live, along with several additional plug-ins. Speaking of them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then was longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occur periodically. Despite the game's release out of beta in 2011, GameSpot said it had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when it broke Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since debuting on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached 21 million sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%; it outsold both the PS3 and PS4 debut releases and became the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories of Best Debut Game, Best Downloadable Game, and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first Mob Vote this was changed so that losing mobs would still have a chance to be added to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos, often made by commentators, began to gain influence on YouTube. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. YouTube announced on 14 December 2021 that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in the first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
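In classrooms, this kind of programming exercise typically takes the form of a short script run against a live game world. The following is a minimal, hedged sketch only: it assumes the community mcpi Python library, which talks to Minecraft: Pi Edition or a server running the RaspberryJuice plugin rather than Mojang's official Education Edition Code Builder, and the coordinates and block choice are arbitrary illustration.

```python
# Illustrative teaching exercise: turn a for-loop into a visible structure in the game.
# Assumes the community "mcpi" library and a game/server listening on localhost:4711.
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()          # connect to the running game
mc.postToChat("Building a staircase next to you...")

pos = mc.player.getTilePos()     # block coordinates the player is standing on
for step in range(10):
    # Each iteration places one stone block, one higher and one further out,
    # so the loop's progress is immediately visible in the world.
    mc.setBlock(pos.x + step + 1, pos.y + step, pos.z, block.STONE.id)
```

The appeal for teaching is that abstract control flow produces an immediate, physical-looking result in the world the student already knows.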
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as "clones", often due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and its Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Ultimately, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Stellar_classification#Class_B] | [TOKENS: 8228] |
Contents Stellar classification In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines. Each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere, although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature. Most stars are currently classified under the Morgan–Keenan (MK) system using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O-type) to the coolest (M-type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g., A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with three classes for other stars that do not fit in the classical system: W, S and C. Some stellar remnants or objects of deviating mass have also been assigned letters: D for white dwarfs and L, T and Y for brown dwarfs (and exoplanets). In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for subgiants, class V for main-sequence stars, class sd (or VI) for subdwarfs, and class D (or VII) for white dwarfs. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a surface temperature around 5,800 K. Conventional colour description The conventional colour description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colours combined appear white, the actual apparent colours the human eye would observe are far lighter than the conventional colour descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colours within the spectrum can be misleading. Excluding colour-contrast effects in dim light, in typical viewing conditions there are no green, cyan, indigo, or violet stars. "Yellow" dwarfs such as the Sun are white, "red" dwarfs are a deep shade of yellow/orange, and "brown" dwarfs do not literally appear brown, but hypothetically would appear dim red or grey/black to a nearby observer. Modern classification The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class (from the older Harvard spectral classification, which did not include luminosity) and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Other modern stellar classification systems, such as the UBV system, are based on color indices—the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters (e.g. Ultraviolet, Blue and Visual). 
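To make the MK notation concrete, the following is a minimal illustrative sketch (plain Python, not part of any astronomy library) that splits a spectral type string such as the Sun's G2V into the Harvard temperature letter, the numeric subtype, and the Roman-numeral luminosity class; it only covers the common designations described above and ignores prefixes and peculiarity suffixes.

```python
import re

# Illustrative only: simplified split of an MK spectral type such as "G2V".
MK_PATTERN = re.compile(
    r"^([OBAFGKM])"                                   # temperature class, O (hottest) to M (coolest)
    r"(\d(?:\.\d)?)?"                                  # optional subtype, 0 (hotter) to 9 (cooler)
    r"(Ia\+?|Ib|VII|VI|IV|V|III|II|I|sd|D|0)?$"        # optional luminosity class
)

def parse_mk(spectral_type: str) -> dict:
    match = MK_PATTERN.match(spectral_type.strip())
    if not match:
        raise ValueError(f"unrecognised spectral type: {spectral_type!r}")
    letter, subtype, luminosity = match.groups()
    return {
        "temperature_class": letter,
        "subtype": float(subtype) if subtype else None,
        "luminosity_class": luminosity,   # e.g. V = main sequence, III = giant
    }

print(parse_mk("G2V"))     # the Sun: {'temperature_class': 'G', 'subtype': 2.0, 'luminosity_class': 'V'}
print(parse_mk("O9.7Ib"))  # a hot supergiant with a fractional subtype
```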
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified the prior alphabetical system by Draper (see History). Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars – in particular, newly-formed white dwarfs – can have surface temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest. The traditional mnemonic for remembering the order of the spectral type letters, from hottest to coolest, is "Oh, Be A Fine Guy/Girl: Kiss Me!". Many alternative mnemonics have been proposed, in contests held by astronomy courses and organizations, but the traditional mnemonic remains the most popular. The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. The Sun is classified as G2. The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra. Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals. The Yerkes spectral classification, also called the MK, or Morgan-Keenan (alternatively referred to as the MKK, or Morgan-Keenan-Kellman) system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman from Yerkes Observatory. This two-dimensional (temperature and luminosity) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity, which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification, or MK, which remains in use today. Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than a dwarf of similar mass. 
Therefore, differences in the spectrum can be interpreted as luminosity effects and a luminosity class can be assigned purely from examination of the spectrum. A number of different luminosity classes are distinguished, as listed in the table below. Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications. In these cases, two special symbols are used: a dash (-) indicates that the star is in between the two classes, while a slash (/) indicates that the star is either one class or the other. For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant. Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence). Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs. Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly less luminous than typical may be given a luminosity class of IIIb, while a luminosity class IIIa indicates a star slightly brighter than a typical giant. A sample of extreme V stars with strong absorption in He II λ4686 spectral lines has been given the Vz designation. An example star is HD 93129 B. Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum. For example, 59 Cygni is listed as spectral type B1.5Vnne, indicating a spectrum with the general classification B1.5V, as well as very broad absorption lines and certain emission lines. History The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved. During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra, shown in the table below. In the late 1890s, this classification began to be superseded by the Harvard classification, which is discussed in the remainder of this article. The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes and the proposed neutron star classes. After the death of her husband, Mary Anna Draper began to fund the creation of the Harvard Plate Stacks and the study of these plates at the Harvard College Observatory. The director of the Observatory, Edward C. Pickering, began to hire pioneering female astronomers collectively known as the Harvard Computers. Though they would study many different astronomical subjects, an early result of this work was the first edition of The Henry Draper Memorial Catalogue of Stellar Spectra, first published in 1890. Williamina Fleming classified most of the spectra in the first edition of the catalogue and is credited with classifying over 10,000 featured stars and discovering 10 novae and more than 200 variable stars. With the help of the Harvard Computers, especially Williamina Fleming, the first iteration of the Henry Draper catalogue was devised to replace the Roman-numeral scheme established by Angelo Secchi. The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P.
Also, the letter Q was used for stars not fitting into any other class. Fleming worked with Pickering to differentiate 17 different classes based on the intensity of hydrogen spectral lines, which causes variation in the wavelengths emanated from stars and results in variation in color appearance. The spectra in class A tended to produce the strongest hydrogen absorption lines while spectra in class O produced virtually no visible lines. The lettering system displayed the gradual decrease in hydrogen absorption in the spectral classes when moving down the alphabet. This classification system was later modified by Annie Jump Cannon and Antonia Maury to produce the Harvard spectral classification scheme. In 1897, another astronomer at Harvard, Antonia Maury, placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I–XXII. Because the 22 Roman-numeral groupings did not account for additional variations in spectra, three additional divisions were made to further specify differences: lowercase letters were added to differentiate the relative appearance of lines in spectra. Antonia Maury published her own stellar classification catalogue in 1897 called "Spectra of Bright Stars Photographed with the 11 inch Draper Telescope as Part of the Henry Draper Memorial", which included 4,800 photographs and Maury's analyses of 681 bright northern stars. This was the first instance in which a woman was credited for an observatory publication. In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one fifth of the way from F to G, and so on. Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. This is essentially the modern form of the Harvard classification system. This system was developed through the analysis of spectra on photographic plates, which could convert light emitted by stars into a readable spectrum. A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. This notation system is still sometimes seen on modern spectra. Spectral types The stellar classification system is taxonomic, based on type specimens, similar to the classification of species in biology: the categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features. Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler. Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term would therefore refer to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" being perhaps K0, K1, K2 and K3. "Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9.
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number. This obscure terminology is a hold-over from a late nineteenth-century model of stellar evolution, which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism, which is now known not to apply to main-sequence stars. If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism provided ages for the Sun that were much smaller than what is observed in the geologic record, and was rendered obsolete by the discovery that stars are powered by nuclear fusion. The terms "early" and "late" were carried over, beyond the demise of the model they were based on. O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars.[c] Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult. O-type spectra were formerly defined by the ratio of the strength of the He II λ4541 line relative to that of He I λ4471, where λ is the radiation wavelength. Spectral type O7 was defined to be the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Due to this, the modern definition uses the ratio of the nitrogen line N IV λ4058 to N III λλ4634-40-42. O-type stars have dominant lines of absorption and sometimes emission for He II lines, prominent ionized (Si IV, O III, N III, and C III) and neutral helium lines, strengthening from O5 to O9, and prominent hydrogen Balmer lines, although not as strong as in later types. Higher-mass O-type stars do not retain extensive atmospheres due to the extreme velocity of their stellar wind, which may reach 2,000 km/s. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence. When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. The MKK scheme was extended to O9.7 in 1971 and O4 in 1978, and new classification schemes that add types O2, O3, and O3.5 have subsequently been introduced. B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars. The transition from class O to class B was originally defined to be the point at which the He II λ4541 line disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today, for main-sequence stars, the B class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2.
For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid-B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471. These stars tend to be found in their originating OB associations, which are associated with giant molecular clouds. The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion. About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars.[c] B-type stars are relatively uncommon and the closest is Regulus, at around 80 light years. Massive yet non-supergiant stars known as Be stars have been observed to show one or more Balmer lines in emission, with the hydrogen-related electromagnetic radiation series projected out by the stars being of particular interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate at a curiously rapid rate. Objects known as B[e] stars – or B(e) stars for typographic reasons – possess distinctive neutral or low ionisation emission lines that are considered to have forbidden mechanisms, undergoing processes not normally allowed under current understandings of quantum mechanics. A-type stars are among the more common naked-eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals (Fe II, Mg II, Si II) at a maximum at A5. The presence of Ca II lines notably strengthens by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars,[c] including 9 stars within 15 parsecs. F-type stars have strengthening spectral lines H and K of Ca II. Neutral metals (Fe I, Cr I) begin to gain on ionized metal lines by late F. Their spectra are characterized by the weaker hydrogen lines and ionized metals. Their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars,[c] including one star, Procyon A, within 20 light-years. G-type stars, including the Sun, have prominent spectral lines H and K of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CH molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood. There are 21 G-type stars within 10 pc.[c] Class G contains the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the unstable yellow supergiant class. K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood.[c] There are also giant K-type stars, which range from hypergiants like RW Cephei, to giants and supergiants, such as Arcturus, whereas orange dwarfs, like Alpha Centauri B, are main-sequence stars. They have extremely weak hydrogen lines, if those are present at all, and mostly neutral metals (Mn I, Fe I, Si I).
By late K, molecular bands of titanium oxide become present. Mainstream theories (those rooted in lower levels of harmful radiation and longer stellar lifetimes) would thus suggest that such stars have the best chances of heavily evolved life developing on orbiting planets (if such life is directly analogous to Earth's), due to a broad habitable zone yet much lower harmful emission than stars with the broadest such zones. Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars.[c][f] However, class M main-sequence stars (red dwarfs) have such low luminosities that none are bright enough to be seen with the unaided eye, unless under exceptional conditions. The brightest-known M class main-sequence star is Lacaille 8760, class M0V, with magnitude 6.7 (the limiting magnitude for typical naked-eye visibility under good conditions being typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found. Although most class M stars are red dwarfs, most of the largest-known supergiant stars in the Milky Way are class M stars, such as VY Canis Majoris, VV Cephei, Antares, and Betelgeuse. Furthermore, some larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5. The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum, especially TiO) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M. Extended spectral types A number of new spectral types have been taken into use from newly discovered types of stars. Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen. Once included as type O stars, the Wolf–Rayet stars of class W or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon, and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds, thereby directly exposing their hot helium shells. Class WR is further divided into subclasses, listed below, according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers). Although the central stars of most planetary nebulae (CSPNe) show O-type spectra, around 10% are hydrogen-deficient and show WR spectra. These are low-mass stars, and to distinguish them from the massive Wolf–Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN]. The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type having a slash in it (e.g. "Of/WNL"). There is a secondary group found with these spectra, a cooler, "intermediate" group designated "Ofpe/WN9". These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation of the evolutionary difference from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If*/WN5-7, which are even hotter than the original "slash" stars. A separate group consists of O stars with strong magnetic fields; their designation is Of?p.
The new spectral types L, T, and Y were created to classify infrared spectra of cool stars. This includes both red dwarfs and brown dwarfs that are very faint in the visible spectrum. Brown dwarfs, stars that do not undergo hydrogen fusion, cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and will cool through the L, T, and Y spectral classes, faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap between spectral types' effective temperature and luminosity for some masses and ages of different L-T-Y types, no distinct temperature or luminosity values can be given. Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while in the height of its luminous red nova eruption. Class T dwarfs are cool brown dwarfs with surface temperatures between approximately 550 and 1,300 K (277 and 1,027 °C; 530 and 1,880 °F). Their emission peaks in the infrared. Methane is prominent in their spectra. Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than what was previously conjectured. It is theorized that these proplyds are in a race with each other. The first one to form will become a protostar, which are very violent objects and will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are quite invisible to us. Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. Although such dwarfs have been modelled and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE) there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2. The spectra of these prospective Y objects display absorption around 1.55 micrometers. Delorme et al. have suggested that this feature is due to absorption from ammonia, and that this should be taken as the indicative feature for the T-Y transition. In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. However, this feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature. 
The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650, is a >Y2 dwarf with an effective temperature originally estimated around 300 K, the temperature of the human body. Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714 with an approximate temperature of 250 K, and a mass just seven times that of Jupiter. The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass (although they cool to become planets), which means that Y class objects straddle the 13 Jupiter mass deuterium-fusion limit that marks the current IAU division between brown dwarfs and planets. Young brown dwarfs have low surface gravities because they have larger radii and lower masses compared to the field stars of similar spectral type. These sources are marked by a letter beta (β) for intermediate surface gravity and gamma (γ) for low surface gravity. Indications of low surface gravity are weak CaH, K I, and Na I lines, as well as strong VO bands. Alpha (α) stands for normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for peculiar. It is also used for other unusual features and summarizes different properties indicative of low surface gravity, subdwarfs, and unresolved binaries. The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects. The red suffix describes objects that are red in color but of an older age; this is interpreted not as low surface gravity but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries; others, such as 2MASS J11263991−5003550, are not binaries and are explained by thin and/or large-grained clouds. Carbon stars are stars whose spectra indicate the production of carbon, a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C. The giants among those stars are presumed to produce this carbon themselves, but some stars in this class are double stars whose odd atmospheres are suspected of having been transferred from a companion that is now a white dwarf, back when that companion was a carbon star. Originally classified as R and N stars, these are also known as carbon stars. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid-G to late M. These have more recently been remapped into a unified carbon classifier C with N0 starting at roughly C6. Another subset of cool carbon stars are the C–J-type stars, which are characterized by the strong presence of molecules of ¹³CN in addition to those of ¹²CN. A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses. Class S stars form a continuum between class M stars and carbon stars. 
Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C2 bands. Class S stars have excess amounts of zirconium and other elements produced by the s-process, and have carbon and oxygen abundances that are closer to each other than in class M stars or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars. The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum. The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more-recent but less-common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5. In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch. The class D (for Degenerate) is the modern classification used for white dwarfs—low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere. The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/Teff, where Teff is the effective surface temperature, measured in kelvins. Originally, this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9 (for example, DA1.5 for IK Pegasi B). Two or more of the type letters may be used to indicate a white dwarf that displays more than one of these spectral features. A different set of spectral peculiarity symbols is used for white dwarfs than for other types of stars. Luminous blue variables (LBVs) are rare, massive and evolved stars that show unpredictable and sometimes dramatic variations in their spectra and brightness. During their "quiescent" states, they are usually similar to B-type stars, although with unusual spectral lines. During outbursts, they are more similar to F-type stars, with significantly lower temperatures. Many papers treat LBV as its own spectral type. Finally, the classes P and Q are left over from the system developed by Cannon for the Henry Draper Catalogue. 
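The white dwarf temperature index described above is simple arithmetic, so a short sketch may make it concrete. The snippet below is an illustration only, not part of any classification software: the function name is invented here, and the round figure of about 35,000 K used for the IK Pegasi B example is an assumed value for demonstration, not one quoted in this text.

```python
def wd_temperature_index(t_eff_kelvin, step=0.5):
    """Return the white dwarf temperature index, a rounded form of 50400 / Teff.

    Historically the index was rounded to a whole digit 1-9; more recent usage
    allows fractional values, so the rounding step is left as a parameter.
    """
    raw = 50400.0 / t_eff_kelvin
    return round(raw / step) * step

# Illustrative only: assuming an effective temperature of roughly 35,000 K
# (an assumed value of the order quoted for IK Pegasi B),
# 50400 / 35000 is about 1.44, which rounds to 1.5, i.e. spectral type DA1.5.
print(wd_temperature_index(35000))          # -> 1.5
print(wd_temperature_index(10000, step=1))  # -> 5, giving a "...5" type such as DA5
```

The choice of rounding step simply mirrors the historical convention mentioned above: whole digits originally, half-steps and finer values in more recent usage.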
They are occasionally used for certain objects not associated with a single star: Type P objects are stars within planetary nebulae (typically young white dwarfs or hydrogen-poor M giants); type Q objects are novae. Stellar remnants Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs, and as can be seen from the radically different classification scheme for class D, stellar remnants are difficult to fit into the MK system. The Hertzsprung–Russell diagram, which the MK system is based on, is observational in nature, so these remnants cannot easily be plotted on the diagram, or cannot be placed on it at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to quickly fade in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram. A classification system for neutron stars using Roman numerals has been proposed: type I for less massive neutron stars with low cooling rates, type II for more massive neutron stars with higher cooling rates, and a proposed type III for more massive neutron stars (possible exotic star candidates) with higher cooling rates. The more massive a neutron star is, the higher the neutrino flux it carries. These neutrinos carry away so much heat energy that after only a few years the temperature of an isolated neutron star falls from the order of billions to only around a million kelvin. This proposed neutron star classification system is not to be confused with the earlier Secchi spectral classes and the Yerkes luminosity classes. Replaced spectral classes Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N. Stellar classification, habitability, and the search for life While humans may eventually be able to colonize any kind of stellar habitat, this section will address the probability of life arising around other stars. Stability, luminosity, and lifespan are all factors in stellar habitability. Humans know of only one star that hosts life, the G-class Sun, a star with an abundance of heavy elements and low variability in brightness. The Solar System is also unlike many stellar systems in that it only contains one star (see Habitability of binary star systems). Working from these constraints and the problems of having an empirical sample set of only one, the range of stars that are predicted to be able to support life is limited by a few factors. Of the main-sequence star types, stars with more than 1.5 times the mass of the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). At the other extreme, dwarfs of less than half the mass of the Sun (spectral type M) are likely to tidally lock planets within their habitable zone, along with other problems (see Habitability of red dwarf systems). While there are many problems facing life on red dwarfs, many astronomers continue to model these systems due to their sheer numbers and longevity. 
For these reasons NASA's Kepler Mission is searching for habitable planets at nearby main-sequence stars that are less massive than spectral type A but more massive than type M—making the most probable stars to host life dwarf stars of types F, G, and K.
======================================== |
[SOURCE: https://he.wikipedia.org/wiki/2008] | [TOKENS: 514] |
Contents 2008 The year 2008 is the eighth year of the 21st century. It is a leap year, 366 days long. 1 January 2008 in the Gregorian calendar precedes 1 January in the Julian calendar by 13 days. All dates below follow the Gregorian calendar. Notable events worldwide: January, February, March, April, May, June, July, August, September, October, November, December. Deaths: January, February, March, April, May, June, July, August, September, October, November, December. Calendar: below is a combined Gregorian–Hebrew calendar with international observance days. Jewish holidays and festivals: none listed. See also External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jesus_Christ] | [TOKENS: 18373] |
Contents Jesus Jesus[e] (c. 6 to 4 BC – AD 30 or 33), also referred to as Jesus Christ,[f] Jesus of Nazareth, and by various other names and titles, was a 1st-century Jewish preacher and religious leader in the Roman province of Judaea. He is the central figure of Christianity, the world's largest religion. Most Christians consider Jesus to be the incarnation of God the Son and the awaited messiah, or Christ, a descendant of the Davidic line prophesied in the Old Testament. Virtually all modern scholars of antiquity agree that Jesus existed historically.[g] Accounts of Jesus's life are contained in the Gospels, especially the four canonical Gospels of the New Testament. Since the Enlightenment, academic research has produced various views on the historical reliability of the Gospels and the extent to which they reflect the historical Jesus.[h] According to Christian tradition, as preserved in the Gospels and the Acts of the Apostles, Jesus was circumcised at eight days old, presented at the Temple in Jerusalem at forty days old, baptized by John the Baptist as a young adult, and, after 40 days and nights of fasting in the wilderness, began his public ministry. He was an itinerant teacher whom his followers believed to possess divine authority in interpreting Jewish law. Jesus often debated with other Jews about how best to follow God, engaged in healings, taught in parables, and gathered followers, 12 of whom he appointed as his apostles. According to the New Testament accounts, he was arrested in Jerusalem and tried by the Sanhedrin, handed over to the Roman authorities, and crucified on the order of Pontius Pilate, the Roman prefect of Judaea. After his death, his followers became convinced that he rose from the dead, and the community they formed eventually developed into the early Christian Church, which expanded into a worldwide movement. Christian theology includes the beliefs that Jesus was conceived by the Holy Spirit, was born of a virgin named Mary, performed miracles, founded the Christian Church, died by crucifixion as a sacrifice for atonement for sin, rose from the dead on the third day, and ascended into Heaven, from where he will return. Christians commonly believe that Jesus enables people to be reconciled to God. The Nicene Creed asserts that Jesus will judge the living and the dead, either before or after their bodily resurrection, an event associated with the Second Coming of Jesus in Christian eschatology. The great majority of Christians worship Jesus as the incarnation of God the Son, the second of the three persons of the Trinity.[i] The birth of Jesus is celebrated annually, generally on 25 December,[j] as Christmas. His crucifixion is commemorated on Good Friday and his resurrection on Easter Sunday. The world's most widely used calendar era—in which the current year is AD 2026 (or 2026 CE)—is traditionally based on the approximate date of the birth of Jesus. Mainstream Judaism rejects the belief that Jesus was the awaited messiah, holding that he did not fulfill messianic prophecies, was not lawfully anointed, and was neither divine nor resurrected. In contrast, Jesus in Islam[k] is considered the messiah and a prophet of God, who was sent to the Israelites and will return to Earth before the Day of Judgement. Muslims believe that Jesus was born of the virgin Mary but was neither God nor the son of God. 
Most Muslims do not believe that he was killed or crucified, but that God raised him into Heaven while he was still alive.[l] Jesus is also revered in the Baháʼí and Druze faiths, as well as in Rastafari. Name A typical Jewish person in Jesus's time had only one name, sometimes followed by a patronymic phrase of the form "son of [father's name]", or by the person's home town. Thus, in the New Testament, Jesus is commonly referred to as "Jesus of Nazareth".[m] Jesus's neighbours in Nazareth referred to him as "the carpenter, the son of Mary and brother of James and Joses and Judas and Simon", "the carpenter's son", or "Joseph's son"; in the Gospel of John, the disciple Philip refers to him as "Jesus son of Joseph from Nazareth". The name Jesus is the English transliteration, through Latin Iesus, of Ancient Greek: Ἰησοῦς, which is the Greek rendering of the Hebrew name Joshua (יְהוֹשֻׁעַ Yehoshua). The Hebrew/Aramaic name was common among Judean Jews at the time of Jesus's birth, although by that period it had been shortened to יֵשׁוּעַ (Yeshua) from יְהוֹשֻׁעַ (Yehoshua); the contraction had already occurred in later biblical books such as Nehemiah, where Joshua is referred to as Yeshua. The name means "God saves" in Hebrew, literally "Yahweh saves", from the root ישׁע (y-š-ʿ, 'to save') and the noun יְשׁוּעָה (yeshuah, 'salvation'). The Gospel of Matthew asserts the etymological significance of Jesus's name explicitly in the prophecy of the angel to Joseph about his birth: "you will call his name Jesus (Ἰησοῦς), for he will save (σώσει) his people from their sins". The fact that Moses' successor Joshua bears the same name as Jesus in the original Greek, Hebrew, and Aramaic has been given theological significance by commentators, as a parallel is often drawn between the two leaders and the etymology of their shared name ('to save'): Joshua leads the Jews into the Promised Land, while in Christianity Jesus is understood to save both Jews and Gentiles from their sins. Since the 1st century, Christians have commonly referred to Jesus as "Jesus Christ". The word Christ is not a given name but was originally a title or office ("the Christ"), meaning "The Messiah". The term derives from the Greek Χριστός (Christos), a calque of the Hebrew word משיח (mashiakh), transliterated into English as messiah. The Hebrew term means "anointed", from the verb מָשַׁח (mashaḥ), “to rub with oil, to anoint”. In the Septuagint, the Hebrew word was rendered into Greek as χριστός (christos), meaning “anointed”, from the verb χρίω (chrio), “to rub with oil, to anoint”. In biblical Judaism, sacred oil was used to anoint certain exceptionally holy people and objects as part of their religious investiture. Early Christians designated Jesus as "the Christ" because they believed him to be the Messiah whose arrival is prophesied in the Hebrew Bible (Old Testament). In post-biblical usage, Christ came to be viewed as a name—one part of "Jesus Christ". The term Christian, meaning a follower of Christ, has been in use since the 1st century. Life and teachings in the New Testament The four canonical gospels (Matthew, Mark, Luke, and John) are the foremost sources for the life and message of Jesus. Other parts of the New Testament also include references to key episodes in his life, such as the Last Supper in 1 Corinthians 11:23–26. Acts of the Apostles refers to Jesus's early ministry and its anticipation by John the Baptist. Acts 1:1–11 provides more detail about the Ascension of Jesus than the canonical gospels do. 
In the undisputed Pauline letters, which were written earlier than the gospels, Jesus's words or instructions are cited several times.[n] Some early Christian groups had separate descriptions of Jesus's life and teachings that are not included in the New Testament. These include the Gospel of Thomas, Gospel of Peter, Gospel of Judas, the Apocryphon of James, and many other apocryphal writings. Most scholars conclude that these texts were written later and are less historically reliable than the canonical gospels. The canonical gospels are four accounts, each attributed to a different author. The authors of the gospels are generally regarded as pseudonymous and are attributed by tradition to the four evangelists, each associated with Jesus or his close followers: Mark by John Mark, an associate of Peter; Matthew to one of Jesus's disciples; Luke to a companion of Paul mentioned in a few epistles; and John to another of Jesus's disciples, the "beloved disciple". According to the Marcan priority hypothesis, the first to be written was the Gospel of Mark (written AD 60–75), followed by the Gospel of Matthew (AD 65–85), the Gospel of Luke (AD 65–95), and the Gospel of John (AD 75–100). Most scholars agree that the authors of Matthew and Luke used Mark as a source for their gospels. Since Matthew and Luke also share some content not found in Mark, many scholars infer that they used another source (commonly called the "Q source") in addition to Mark. Luke and Matthew treat their sources more conservatively than other ancient historians like Diodorus Siculus, though the parallels and variations of the Synoptic gospels are typical of ancient historical biographies. One important aspect of the study of the gospels is the literary genre under which they fall. Genre "is a key convention guiding both the composition and the interpretation of writings". Whether the gospel authors set out to write novels, myths, histories, or biographies has a significant impact on how their works ought to be interpreted. Some studies have suggested that the gospels ought to be seen as a form of ancient biography. Although not without critics, the view that the gospels are a type of ancient biography represents the consensus among scholars today. Concerning the accuracy of the accounts, viewpoints range from considering them inerrant descriptions of Jesus's life, to doubting their historical reliability on various points, to regarding them as providing very little historical information about his life beyond the basics. Matthew, Mark, and Luke are known as the Synoptic Gospels, from the Greek σύν (syn, 'together') and ὄψις (opsis, 'view'), because they are similar in content, narrative arrangement, language, and paragraph structure, and can readily be set side by side for synoptic comparison. Scholars generally agree that it is impossible to find any direct literary relationship between the Synoptic Gospels and the Gospel of John. Many events—such as Jesus's baptism, crucifixion, and interactions with his apostles—appear in the Synoptic Gospels, but incidents such as the transfiguration and Jesus's exorcising demons do not appear in John, which also differs on other matters, such as the cleansing of the Temple. The Synoptics emphasize different aspects of Jesus. In Mark, Jesus is the Son of God whose mighty works demonstrate the presence of God's Kingdom. He is portrayed as a tireless wonder worker and the servant of both God and humanity. This short gospel records relatively few of Jesus's words or extended teachings. 
The Gospel of Matthew emphasizes that Jesus is the fulfilment of God's will as revealed in the Old Testament and the Lord of the Church. He is presented as the "Son of David", a "king", and the Messiah. Luke presents Jesus as the divine-human saviour who shows compassion to the needy. He is depicted as the friend of sinners and outcasts, who came to seek and save the lost. This gospel includes well-known parables, such as the Good Samaritan and the Prodigal Son. The prologue to the Gospel of John identifies Jesus as an incarnation of the divine Word (Logos). As the Word, Jesus is described as eternally present with God, active in all creation, and the source of humanity's moral and spiritual nature. In this gospel, Jesus is portrayed as not only greater than any past human prophet but greater than any prophet could be: he not only speaks God's Word; he is God's Word. In the Gospel of John, Jesus reveals his divine role publicly and is depicted as the Bread of Life, the Light of the World, the True Vine, and more. The authors of the New Testament generally showed little interest in establishing an absolute chronology of Jesus's life or in synchronizing the episodes of his life with the secular history of the age. As stated in John 21:25, the gospels do not claim to provide an exhaustive list of the events of Jesus's life. The accounts were primarily written as theological documents in the context of early Christianity, with timelines as a secondary consideration. The gospels devote about one third of their text to the last week of Jesus's life in Jerusalem, referred to as the Passion. They do not provide enough detail to satisfy the demands of modern historians regarding exact dates, but it is possible to draw from them a general picture of Jesus's life story. Jesus was Jewish, born to Mary, the wife of Joseph. The Gospels of Matthew and Luke offer two different accounts of his genealogy. Matthew traces Jesus's ancestry to Abraham through David, while Luke traces Jesus's ancestry through Adam to God. The lists are identical between Abraham and David but differ markedly from that point onward; Matthew has 27 generations from David to Joseph, whereas Luke has 42, with almost no overlap between the names on the two lists.[o] Various theories have been put forward to explain why the two genealogies are so different.[p] Both Matthew and Luke describe Jesus's birth, particularly that he was born to a virgin named Mary in Bethlehem in fulfilment of prophecy. Luke's account emphasizes events before the birth of Jesus and centres on Mary, while Matthew's mostly covers events after the birth and centres on Joseph. Both accounts state that Mary was engaged to a man named Joseph, who was descended from King David and was not Jesus's biological father, and both support the doctrine of the virgin birth of Jesus, according to which Jesus was miraculously conceived by the Holy Spirit in Mary's womb when she was still a virgin. At the same time, there is evidence, at least in the Lukan Acts of the Apostles, that Jesus was thought to have had, like many figures in antiquity, a dual paternity, since there it is stated that he descended from the seed or loins of David. By taking Jesus as his own son, Joseph is understood to confer on him the necessary Davidic descent. Some scholars suggest that Jesus had Levite heritage from Mary, based on her blood relationship with Elizabeth. 
In Matthew, Joseph is troubled because Mary, his betrothed, is pregnant, but in the first of Joseph's four dreams an angel assures him not to be afraid to take Mary as his wife because her child was conceived by the Holy Spirit. In Matthew 2:1–12, wise men or Magi from the East bring gifts to the young Jesus as the King of the Jews. They find him in a house in Bethlehem. Herod the Great hears of Jesus's birth and, wanting him killed, orders the killings of male infants in Bethlehem and its surroundings. However, an angel warns Joseph in his second dream, and the family flees to Egypt, later returning and settling in Nazareth. In Luke 1:31–38, Mary learns from the angel Gabriel that she will conceive and bear a child called Jesus through the action of the Holy Spirit. When Mary is due to give birth, she and Joseph travel from Nazareth to Joseph's ancestral home in Bethlehem to register in the census ordered by Caesar Augustus. While there, Mary gives birth to Jesus, and, as they have found no room in the inn, she places the newborn in a manger. An angel announces the birth to a group of shepherds, who go to Bethlehem to see Jesus and subsequently spread the news abroad. Luke 2:21 recounts how Joseph and Mary have their baby circumcised on the eighth day after birth and name him Jesus, as Gabriel had commanded Mary. After the presentation of Jesus at the Temple, Joseph, Mary, and Jesus return to Nazareth. Jesus's childhood home is identified in the Gospels of Luke and Matthew as Nazareth, a town in Galilee in present-day Israel, where he lived with his family. Although Joseph appears in descriptions of Jesus's childhood, no mention is made of him thereafter. His other family members, including his mother Mary; his four brothers, James, Joses (or Joseph), Judas, and Simon; and his unnamed sisters, are mentioned in the Gospels and other sources. Jesus's maternal grandparents are named Joachim and Anne in the Gospel of James. The Gospel of Luke records that Mary was a relative of Elizabeth, the mother of John the Baptist. Some extra-biblical contemporary sources consider Jesus and John the Baptist to be second cousins, based on the belief that Elizabeth was the daughter of Sobe, the sister of Anne. The Gospel of Mark reports that at the beginning of his ministry, Jesus comes into conflict with his neighbours and family. Jesus's mother and brothers come to get him because people are saying that he is out of his mind. Jesus responds that his followers are his true family. In the Gospel of John, Jesus and his mother attend a wedding at Cana, where he performs his first miracle at her request. Later, she is present at his crucifixion, and he expresses concern for her well-being. Jesus is called a τέκτων (tektōn) in Mark 6:3, a term traditionally understood as "carpenter" but which can also refer to makers of objects in various materials, including builders. Given the term's broad semantic range and "the socio-historical reality of a common Nazarene τέκτων", Matthew K. Robinson, a minister and academic, prefers to translate τέκτων as 'builder-craftsman'. The Gospels indicate that Jesus could read, paraphrase, and debate scripture, but this does not necessarily mean that he received formal scribal training. The Gospel of Luke reports two journeys of Jesus and his parents in Jerusalem during his childhood. They come to the Temple in Jerusalem for the presentation of Jesus as a baby in accordance with Jewish Law, where a man named Simeon prophesies about Jesus and Mary. 
When Jesus, at the age of twelve, goes missing on a pilgrimage to Jerusalem for Passover, his parents find him in the Temple sitting among the teachers, listening to them and asking questions, and the people are amazed at his understanding and answers. Mary scolds Jesus for going missing, to which Jesus replies that he must "be in his Father's house". The synoptic gospels describe Jesus's baptism in the Jordan River and the temptations he faced while spending forty days in the Judaean Desert as a preparation for his public ministry. In each of these accounts, the account of Jesus's baptism is preceded by information about John the Baptist. They portray John preaching repentance for the forgiveness of sins, encouraging the giving of alms to the poor, baptizing people in the region of the Jordan River around Perea, and foretelling the arrival of someone "more powerful" than he. In the Gospel of Mark, John the Baptist baptizes Jesus, and as Jesus comes up out of the water he sees the Holy Spirit descending on him like a dove, and a voice comes from heaven and declares him to be God's Son. This is one of two events described in the Gospels where a voice from Heaven refers to Jesus as "Son", the other being the Transfiguration. The Spirit then drives him into the wilderness, where he is tempted by Satan. After John's arrest, Jesus begins his ministry in Galilee. In the Gospel of Matthew, when Jesus comes to John to be baptized, John protests, saying, "I need to be baptized by you." Jesus instructs him to proceed with the baptism "to fulfil all righteousness". Matthew then narrates three specific temptations that Satan offers Jesus in the wilderness. In the Gospel of Luke, the Holy Spirit descends in bodily form like a dove after all the people have been baptized and while Jesus is praying. Later, John implicitly acknowledges Jesus by sending his followers to inquire about him. Luke also describes three temptations experienced by Jesus in the wilderness before he begins his ministry in Galilee. The Gospel of John does not narrate Jesus's baptism and temptation. Instead, John the Baptist testifies that he saw the Spirit descend and remain on Jesus. John publicly proclaims Jesus as the Lamb of God, and some of John's followers become disciples of Jesus. Before John is imprisoned, Jesus leads his followers to baptize, and they baptize more people than John. The Synoptics depict two main geographical settings in Jesus's ministry. The first takes place in Galilee, north of Judea, where Jesus conducts a largely successful ministry; the second occurs in Jerusalem, where he is rejected and killed. Often referred to as "rabbi", Jesus delivers his message orally. In these accounts, he forbids those who recognize him as the messiah, including people he heals and demons he is said to exorcise, to speak about it (see Messianic Secret). By contrast, the Gospel of John portrays Jesus's ministry as taking place primarily in and around Jerusalem rather than in Galilee, and his divine nature is more openly proclaimed and recognized. Scholars commonly divide the ministry of Jesus into several stages. The Galilean ministry begins when Jesus returns to Galilee from the Judaean Desert after resisting the temptations of Satan. He then preaches throughout Galilee, and in Matthew 4:18–20 his first disciples, who will later form the core of the early Church, encounter him and begin to follow him. 
This period includes the Sermon on the Mount, one of Jesus's major discourses, as well as the calming of the storm, the feeding of the 5,000, walking on water, and various other miracles and parables. It concludes with the Confession of Peter and the Transfiguration. As Jesus travels towards Jerusalem, during what is often called the Perean ministry, he returns to the region where he was baptized, roughly a third of the way down from the Sea of Galilee along the Jordan River. The final phase of his ministry, in Jerusalem, begins with his triumphal entry into the city on Palm Sunday. In the Synoptic Gospels, during that week Jesus drives the money changers from the Second Temple, and Judas bargains to betray him. This period culminates in the Last Supper and, in the Johannine account, the Farewell Discourse. Near the beginning of his ministry, Jesus appoints twelve apostles. In Matthew and Mark, Jesus calls his first four apostles, who are fishermen, and they are described as immediately leaving their nets to follow him. In John, Jesus's first two apostles are initially disciples of John the Baptist; the Baptist sees Jesus and calls him the Lamb of God, and the two, hearing this, begin to follow Jesus. In addition to the Twelve Apostles, the introduction of the Sermon on the Plain in Luke identifies a much larger group of people as disciples. In Luke 10:1–16, Jesus sends 70 or 72 of his followers out in pairs to prepare towns for his prospective visits; they are instructed to accept hospitality, heal the sick, and proclaim the Kingdom of God. In the Synoptics, Jesus teaches extensively, often in parables, about the Kingdom of God. Jesus also speaks of the "Son of Man", an apocalyptic figure who will come to gather the chosen. Jesus calls people to repent of their sins and to devote themselves wholly to God. He instructs his followers to observe Jewish law, although he is perceived by some contemporaries as having broken the law himself, for example in relation to Sabbath observance. When asked what the greatest commandment is, Jesus replies: "You shall love the Lord your God with all your heart, and with all your soul, and with all your mind ... And a second is like it: 'You shall love your neighbor as yourself.'" Other ethical teachings attributed to Jesus include loving one's enemies, refraining from hatred and lust, turning the other cheek, and forgiving those who have sinned against oneself. The Gospel of John presents the teachings of Jesus not merely as his own preaching but as divine revelation. John the Baptist, for example, states in John 3:34: "He whom God has sent speaks the words of God, for he gives the Spirit without measure." In John 7:16, Jesus says, "My teaching is not mine but his who sent me." He reiterates this in John 14:10: "Do you not believe that I am in the Father and the Father is in me? The words that I say to you I do not speak on my own; but the Father who dwells in me does his works." Approximately 30 parables constitute about one-third of Jesus's recorded teachings. The parables appear both within longer sermons and at various other places in the narrative. They often contain symbolism and typically relate aspects of the physical world to spiritual realities. Common themes include the kindness and generosity of God, as well as the dangers and consequences of transgression. Some parables, such as that of the Prodigal Son, are relatively straightforward, while others, such as the Growing Seed, are more complex, profound, and difficult to interpret. 
When his disciples ask why he speaks to the people in parables, Jesus replies that the chosen disciples have been granted "to know the secrets of the kingdom of heaven", unlike the rest, adding: "For the one who has will be given more and he will have in abundance. But the one who does not have will be deprived even more", and he goes on to say that most of their generation have developed "dull hearts" and are therefore unable to understand. In the gospel accounts, Jesus devotes a substantial portion of his ministry to performing miracles, especially healings. These miracles are commonly classified into two main categories: healing miracles and nature miracles. The healing miracles include cures of physical ailments, exorcisms, and the raising of the dead. The nature miracles demonstrate authority over the natural world and include turning water into wine, walking on water, and calming a storm, among others. Jesus attributes his miracles to a divine source. When opponents accuse him of casting out demons by the power of Beelzebub, the prince of demons, he replies that he does so by the "Spirit of God" (Matthew 12:28) or "finger of God", arguing that it would be illogical for Satan to undermine his own domain; he also asks, if he exorcises by Beelzebub, "by whom do your sons cast them out?" In Matthew 12:31–32, he further states that while all kinds of sin, including "insults against God" or "insults against the Son of Man", may be forgiven, blasphemy against "The Holy Spirit" will never be forgiven, and those guilty of it bear their sin permanently. In John, Jesus's miracles are described as "signs", performed to manifest his mission and identity. In the Synoptic Gospels, when some teachers of the law and Pharisees ask him for a miraculous sign to validate his authority, Jesus refuses, saying that no sign will be given to a corrupt and evil generation except the sign of the prophet Jonah. In the Synoptics, the crowds typically respond to his miracles with awe and press upon him to heal their sick, whereas in John, Jesus is depicted as less constrained by the crowds, who often respond to his signs with belief and trust. A feature common to all the miracle narratives is that Jesus performs them freely and does not request or accept payment. The miracle stories are frequently interwoven with teachings, and the miracles themselves often carry a didactic dimension. Many emphasize the importance of faith: in the cleansing of ten lepers and the raising of Jairus's daughter, for instance, the beneficiaries are told that their healing is due to their faith. In A Marginal Jew, scholar John P. Meier argues that "the miracle traditions about Jesus' public ministry are already so widely attested in various sources" that any "total fabrication by the early church is, practically speaking, impossible". He bases this claim on literary sources such as the Gospels of Matthew, Luke, and John, as well as on the writings of the historian Josephus. Meier contends that the "criterion of multiple attestation of sources and forms" supports the conclusion that Jesus performed "extraordinary deeds" which his contemporaries regarded as miracles. Scholar Paul J. Achtemeier argues that such miracles were not unique to Jesus in the ancient world and were perceived as ambiguous even by eyewitnesses. 
He notes that Jesus likely performed acts understood as exorcisms, which were "accepted as reality by his contemporaries", but that these should not be seen as having "probative value with respect to Jesus," since witnesses could claim that he was working with either Satan or God. Scholar Gregory Sterling observes that, in the case of Jesus's alleged exorcisms, "For first-century Galileans who believed in the personal presence of evil in the form of demons, Jesus' act was a validation of his ministry." At approximately the midpoint of each of the three Synoptic Gospels, two significant events are narrated: the Confession of Peter and the Transfiguration of Jesus, neither of which is mentioned in the Gospel of John. In the Confession of Peter, Peter declares to Jesus, "You are the Messiah, the Son of the living God"; Jesus affirms that this is a divinely revealed truth. Following this confession, Jesus begins to tell his disciples about his forthcoming suffering, death, and resurrection. In the Transfiguration, Jesus takes Peter and two other apostles up an unnamed mountain, where "he was transfigured before them, and his face shone like the sun, and his clothes became dazzling white". A bright cloud envelops them, and a voice from the cloud proclaims, "This is my Son, the Beloved; with him I am well pleased; listen to him." The description of the final week of Jesus's life, often referred to as Passion Week, occupies roughly one-third of the narrative in the canonical gospels. This section begins with Jesus's triumphal entry into Jerusalem and concludes with his crucifixion. In the Synoptic Gospels, the final week in Jerusalem concludes the journey through Perea and Judea that Jesus began in Galilee. Jesus enters Jerusalem riding a young donkey, evoking the motif of the Messiah's donkey from the Book of Zechariah, in which the humble king of the Jews comes to the city in this manner. As he proceeds, people spread cloaks and small branches of trees (palm fronds) on the road before him and chant lines from Psalm 118:25–26. Jesus next expels the money changers from the Temple, accusing them of turning it into a den of thieves through their commercial activities. Most scholars agree that it is overwhelmingly likely that Jesus did something in the temple and mentioned its destruction. In John, the Cleansing of the Temple occurs at the beginning of Jesus's ministry instead of at the end. Ancient compositional practices involved such chronological displacement and compression, with even reliable biographers like Plutarch displaying them. Jesus comes into conflict with the Jewish elders, such as when they question his authority and when he criticizes them and calls them hypocrites. Judas Iscariot, one of the twelve apostles, secretly strikes a bargain with the Jewish elders, agreeing to betray Jesus to them for 30 silver coins. The Gospel of John recounts two other feasts in which Jesus taught in Jerusalem before the Passion Week. In Bethany, a village near Jerusalem, Jesus raises Lazarus from the dead. This potent sign increases the tension with authorities, who conspire to kill him. Mary of Bethany anoints Jesus's feet, foreshadowing his entombment. Jesus then makes his messianic entry into Jerusalem. The cheering crowds greeting Jesus as he enters Jerusalem add to the animosity between him and the establishment. In John, Jesus has already cleansed the Second Temple during an earlier Passover visit to Jerusalem. John next recounts Jesus's Last Supper with his disciples. 
The Last Supper is the final meal that Jesus shared with his twelve apostles in Jerusalem before his crucifixion. The Last Supper is mentioned in all four canonical gospels; Paul's First Epistle to the Corinthians also refers to it. During the meal, Jesus predicts that one of his apostles will betray him. Despite each Apostle's assertion that he would not betray him, Jesus reiterates that the betrayer would be one of those present. Matthew 26:23–25 and John 13:26–27 identify Judas as the traitor. In the Synoptics, Jesus takes bread, breaks it, and gives it to the disciples, saying, "This is my body, which is given for you." He then has them all drink from a cup, saying, "This cup that is poured out for you is the new covenant in my blood." The Christian sacrament or ordinance of the Eucharist is based on these events. Although the Gospel of John does not include a description of the bread-and-wine ritual during the Last Supper, most scholars agree that John 6:22–59 (the Bread of Life Discourse) has a eucharistic character and resonates with the institution narratives in the Synoptic Gospels and in the Pauline writings on the Last Supper. In all four gospels, Jesus predicts that Peter will deny knowledge of him three times before the cock crows the next morning. In Luke and John, the prediction is made during the Supper. In Matthew and Mark, the prediction is made after the Supper; Jesus also predicts that all his disciples will desert him. The Gospel of John provides the only account of Jesus washing his disciples' feet after the meal. John also includes a long sermon by Jesus, preparing his disciples (now without Judas) for his departure. Chapters 14–17 of the Gospel of John are known as the Farewell Discourse and are a significant source of Christological content. In the Synoptics, Jesus and his disciples go to the garden Gethsemane, where Jesus prays to be spared his coming ordeal. Then Judas comes with an armed mob, sent by the chief priests, scribes and elders. He kisses Jesus to identify him to the crowd, which then arrests Jesus. In an attempt to stop them, an unnamed disciple of Jesus uses a sword to cut off the ear of a man in the crowd. After Jesus's arrest, his disciples go into hiding, and Peter, when questioned, thrice denies knowing Jesus. After the third denial, Peter hears the cock crow and recalls Jesus's prediction about his denial. Peter then weeps bitterly. In John 18:1–11, Jesus does not pray to be spared his crucifixion, as the gospel portrays him as scarcely touched by such human weakness. The people who arrest him are Roman soldiers and Temple guards. Instead of being betrayed by a kiss, Jesus proclaims his identity, and when he does, the soldiers and officers fall to the ground. The gospel identifies Peter as the disciple who used the sword, and Jesus rebukes him for it. After his arrest, Jesus is taken late at night to the private residence of the high priest, Caiaphas, who had been installed by Pilate's predecessor, the Roman procurator Valerius Gratus. The Sanhedrin was a Jewish judicial body. The gospel accounts differ on the details of the trials. In Matthew 26:57, Mark 14:53, and Luke 22:54, Jesus is taken to the house of the high priest, Caiaphas, where he is mocked and beaten that night. Early the next morning, the chief priests and scribes lead Jesus away into their council. John 18:12–14 states that Jesus is first taken to Annas, Caiaphas's father-in-law, and then to the high priest. 
During the trials Jesus speaks very little, mounts no defence, and gives very infrequent and indirect answers to the priests' questions, prompting an officer to slap him. In Matthew 26:62, Jesus's unresponsiveness leads Caiaphas to ask him, "Have you no answer?". In Mark 14:61, the high priest then asks Jesus, "Are you the Messiah, the Son of the Blessed One?". Jesus replies, "I am", and then predicts the coming of the Son of Man. This provokes Caiaphas to tear his own robe in anger and to accuse Jesus of blasphemy. In Matthew and Luke, Jesus's answer is more ambiguous: in Matthew 26:64, he responds, "You have said so", and in Luke 22:70 he says, "You say that I am." The Jewish elders take Jesus to Pilate's Court and ask the Roman governor, Pontius Pilate, to judge and condemn Jesus for various allegations: subverting the nation, opposing the payment of tribute, claiming to be Christ, a king, and claiming to be the son of God.[q] The use of the word "king" is central to the discussion between Jesus and Pilate. In John 18:36, Jesus states, "My kingdom is not from this world", but he does not unequivocally deny being the King of the Jews. In Luke 23:7–15, Pilate realizes that Jesus is a Galilean and thus falls under the jurisdiction of Herod Antipas, the Tetrarch of Galilee and Perea. Pilate sends Jesus to Herod to be tried, but Jesus says almost nothing in response to Herod's questions. Herod and his soldiers mock Jesus, put an expensive robe on him to make him look like a king, and return him to Pilate, who then calls together the Jewish elders and announces that he has "not found this man guilty". Observing a Passover custom of the time, Pilate allows one prisoner chosen by the crowd to be released. He gives the people a choice between Jesus and a murderer called Barabbas (בר-אבא or Bar-abbâ, "son of the father", from the common given name Abba: 'father'). Persuaded by the elders, the mob chooses to release Barabbas and crucify Jesus. Pilate writes a sign in Hebrew, Latin, and Greek that reads "Jesus of Nazareth, the King of the Jews" (abbreviated as INRI in depictions) to be affixed to Jesus's cross, then scourges Jesus and sends him to be crucified. The soldiers place a crown of thorns on Jesus's head and ridicule him as the King of the Jews. They beat and taunt him before taking him to Calvary, also called Golgotha, for crucifixion. Jesus's crucifixion is described in all four canonical gospels. After the trials, Jesus is led to Calvary carrying his cross; the route traditionally thought to have been taken is known as the Via Dolorosa. The three Synoptic Gospels indicate that Simon of Cyrene assists him, having been compelled by the Romans to do so. In Luke 23:27–28, Jesus tells the women in the multitude of people following him not to weep for him but for themselves and their children. At Calvary, Jesus is offered a sponge soaked in a concoction usually given as a painkiller. According to Matthew and Mark, he refuses it. The soldiers then crucify Jesus and cast lots for his clothes. Above Jesus's head on the cross is Pilate's multilingual inscription, "Jesus of Nazareth, the King of the Jews." Soldiers and passersby mock him about it. Two convicted thieves are crucified along with Jesus. In Matthew and Mark, both thieves mock Jesus. In Luke, one of them rebukes Jesus, while the other defends him. Jesus tells the latter: "today you will be with me in Paradise." The four gospels mention the presence of a group of female disciples of Jesus at the crucifixion. 
In John, Jesus sees his mother Mary and the beloved disciple and tells him to take care of her. In John 19:33–34, Roman soldiers break the two thieves' legs to hasten their death, but not those of Jesus, as he is already dead. Instead, one soldier pierces Jesus's side with a lance, and blood and water flow out. The Synoptics report a period of darkness, and the heavy curtain in the Temple is torn when Jesus dies. In Matthew 27:51–54, an earthquake breaks open tombs. In Matthew and Mark, terrified by the events, a Roman centurion states that Jesus was the Son of God. On the same day, Joseph of Arimathea, with Pilate's permission and with Nicodemus's help, removes Jesus's body from the cross, wraps it in a clean cloth, and buries it in a new rock-hewn tomb. In Matthew 27:62–66, on the following day the chief Jewish priests ask Pilate for the tomb to be secured, and with Pilate's permission the priests place seals on the large stone covering the entrance. The Gospels do not describe the moment of the resurrection of Jesus. They describe the discovery of his empty tomb and several appearances of Jesus, with distinct differences in each narrative. In the four Gospels, Mary Magdalene goes to the tomb on Sunday morning, alone or with one or several other women. The tomb is empty, with the stone rolled away, and there are one or two angels, depending on the accounts. In the Synoptics, the women are told that Jesus is not here and that he is risen. In Mark and Matthew, the angel also instructs them to tell the disciples to meet Jesus in Galilee. In Luke, Peter visits the tomb after he is told it is empty. In John, he goes there with the beloved disciple. Matthew mentions Roman guards at the tomb, who report to the priests of Jerusalem what happened. The priests bribe them to say that the disciples stole Jesus's body during the night. The four Gospels then describe various appearances of Jesus in his resurrected body. Jesus first reveals himself to Mary Magdalene in Mark 16:9 and John 20:14–17, along with "the other Mary" in Matthew 28:9, while in Luke the first reported appearance is to two disciples heading to Emmaus. Jesus then reveals himself to the eleven disciples, in Jerusalem or in Galilee. In Luke 24:36–43, he eats and shows them his tangible wounds to prove that he is not a spirit. He also shows them to Thomas to end his doubts, in John 20:24–29. In the Synoptics, Jesus commissions the disciples to spread the gospel message to all nations, while in John 21, he tells Peter to take care of his sheep. Jesus's ascension into Heaven is described in Luke 24:50–53, Acts 1:1–11, and mentioned in 1 Timothy 3:16. In the Acts of the Apostles, forty days after the Resurrection, as the disciples look on, "he was lifted up, and a cloud took him out of their sight". 1 Peter 3:22 states that Jesus has "gone into heaven and is at the right hand of God". The Acts of the Apostles describes several appearances of Jesus after his Ascension. In Acts 7:55, Stephen gazes into heaven and sees "Jesus standing at the right hand of God" just before his death. On the road to Damascus, the Apostle Paul is converted to Christianity after seeing a blinding light and hearing a voice saying, "I am Jesus, whom you are persecuting." In Acts 9:10–18, Jesus instructs Ananias of Damascus in a vision to heal Paul. The Book of Revelation includes a revelation from Jesus concerning the last days of Earth. 
Early Christianity After Jesus's life, his followers, as described in the first chapters of the Acts of the Apostles, were all Jews either by birth or conversion, for which the biblical term "proselyte" is used, and referred to by historians as Jewish Christians. The early Gospel message was spread orally, probably in Aramaic, but almost immediately also in Greek. The New Testament's Acts of the Apostles and Epistle to the Galatians record that the first Christian community was centred in Jerusalem and its leaders included Peter, James, the brother of Jesus, and John the Apostle. After his conversion, Paul the Apostle spread the teachings of Jesus to various non-Jewish communities throughout the eastern Mediterranean region. Paul's influence on Christian thinking is said to be more significant than that of any other New Testament author. By the end of the 1st century, Christianity began to be recognized internally and externally as a separate religion from Judaism, which itself was refined and developed further in the centuries after the destruction of the Second Temple. Numerous quotations in the New Testament and other Christian writings of the first centuries indicate that early Christians generally used and revered the Hebrew Bible (the Tanakh) as religious text, mostly in the Greek (Septuagint) or Aramaic (Targum) translations. Early Christians wrote many religious works, including the ones included in the canon of the New Testament. The canonical texts, which have become the main sources used by historians to try to understand the historical Jesus and sacred texts within Christianity, were probably written between AD 50 and 120. Historical views Prior to the Enlightenment, the Gospels were usually regarded as accurate historical accounts, but since then scholars have emerged who question the reliability of the Gospels and draw a distinction between the Jesus described in the Gospels and the Jesus of history. Since the 18th century, three separate scholarly quests for the historical Jesus have taken place, each with distinct characteristics and based on different research criteria, which were often developed during the quest that applied them. While there is widespread scholarly agreement on the existence of Jesus,[g] and a basic consensus on the general outline of his life,[r] the portraits of Jesus constructed by various scholars often differ from each other, and from the image portrayed in the gospel accounts. Approaches to the historical reconstruction of the life of Jesus have varied from the "maximalist" approaches of the 19th century, in which the gospel accounts were accepted as reliable evidence wherever possible, to the "minimalist" approaches of the early 20th century, where hardly anything about Jesus was accepted as historical. In the 1950s, as the second quest for the historical Jesus gathered pace, the minimalist approaches faded away, and in the 21st century, minimalists such as Price are a small minority. Although a belief in the inerrancy of the Gospels cannot be supported historically, many scholars since the 1980s have held that, beyond the few facts considered to be historically certain, certain other elements of Jesus's life are "historically probable". Modern scholarly research on the historical Jesus thus focuses on identifying the most probable elements. In AD 6, Judea, Idumea, and Samaria were transformed from a Herodian client state of the Roman Empire into an imperial province, also called Judea. 
A Roman prefect, rather than a client ruler, governed the land. The prefect governed from Caesarea Maritima, leaving Jerusalem to be run by the High Priest of Israel. As an exception, the prefect came to Jerusalem during religious festivals, when religious and patriotic enthusiasm sometimes inspired unrest or uprisings. Galilee with Perea had been a Herodian client state under the rule of Herod Antipas since 4 BC. Galilee was evidently prosperous, and poverty was limited enough that it did not threaten the social order. Philip (d. AD 34), half-brother of Herod Antipas, ruled as Tetrarch over yet another Herodian client state to the north and east of the Sea of Galilee, one that included Gaulanitis, Batanea, and Iturea and was mostly non-Jewish. South of this, on the east bank of the Jordan, was the Decapolis, a collection of Hellenistic city-states that were clients of the Roman Empire. North of Galilee were the cities of Tyre and Sidon, which were in the Roman province of Syria. Though non-Jewish lands surrounded the mostly Jewish territories of Judea and Galilee, Roman law and practice allowed Jews to remain separate legally and culturally. This was the era of Hellenistic Judaism, which combined Jewish religious tradition with elements of Hellenistic culture. Until the fall of the Western Roman Empire and the Muslim conquests of the Eastern Mediterranean, the main centres of Hellenistic Judaism were Alexandria (Egypt) and Antioch (now southern Turkey), the two main Greek colonies of the Middle East and North Africa area, both founded at the end of the 4th century BC in the wake of the conquests of Alexander the Great. Hellenistic Judaism also existed in Jerusalem during the Second Temple Period, where there was conflict between Hellenizers and traditionalists (sometimes called Judaizers). The Hebrew Bible was translated from Biblical Hebrew and Biblical Aramaic into Jewish Koine Greek; the Targum translations into Aramaic were also generated during this era, both due to the decline of knowledge of Hebrew. Jews based their faith and religious practice on the Torah, five books said to have been given by God to Moses. The three prominent religious parties were the Pharisees, the Essenes, and the Sadducees. Together these parties represented only a small fraction of the population. Most Jews looked forward to a time when God would deliver them from their pagan rulers, possibly through war against the Romans. New Testament scholars face a formidable challenge when they analyse the canonical Gospels. The Gospels are not biographies in the modern sense, and the authors explain Jesus's theological significance and recount his public ministry while omitting many details of his life. James Dunn has argued that the accounts of his teachings and life were initially conserved by oral transmission, which was the source of the written Gospels. The Gospels are commonly seen as literature that is based on oral traditions, Christian preaching, and Old Testament exegesis, with the consensus being that they are a variation of Greco-Roman biography, similar to other ancient works such as Xenophon's Memoirs of Socrates. The reports of supernatural events associated with Jesus's death and resurrection make the challenge even more difficult. Scholars regard the Gospels as compromised sources of information because the writers were trying to glorify Jesus. 
Ed Sanders argues that surviving textual sources provide more reliable details for Jesus's thoughts than they do for the thoughts of Alexander the Great, owing to the texts discussing Jesus being authored closer in time to the events they relate to. Biographies written about Alexander the Great (who lived some 330 years earlier) during his own lifetime have all been lost, but are known of through references in biographies written by later authors. Although the texts about Jesus contain ideas from both Jesus and his later followers, it is possible to distinguish which parts originate from Jesus's own view and which were ideas from his later followers. Scholars use several criteria, such as the criterion of independent attestation, the criterion of coherence, and the criterion of discontinuity, to judge the historicity of events. The historicity of an event also depends on the reliability of the source; indeed, the Gospels are neither independent nor consistent records of Jesus's life. The Synoptics, especially Mark, the earliest written gospel, have been considered the most reliable sources of information about Jesus for many decades. John, the latest written gospel, differs considerably from the Synoptic Gospels and has been considered less reliable, although since the third quest John's gospel has been seen as more reliable than previously thought, and sometimes even as more reliable than the Synoptics. Some scholars (such as the Jesus Seminar) believe that the non-canonical Gospel of Thomas might be an independent witness to many of Jesus's parables and aphorisms. For example, Thomas confirms that Jesus blessed the poor and that this saying circulated independently before being combined with similar sayings in the Q source. The majority of scholars are sceptical about this text and believe it should be dated to the 2nd century AD. Other select non-canonical Christian texts may also have value for historical Jesus research. Early non-Christian sources that attest to the historical existence of Jesus include the works of the historians Josephus and Tacitus.[s] Josephus scholar Louis Feldman has stated that "few have doubted the genuineness" of Josephus's reference to Jesus in book 20 of the Antiquities of the Jews, and it is disputed only by a small number of scholars. Tacitus referred to Christ and his execution by Pilate in book 15 of his work Annals. Scholars generally consider Tacitus's reference to the execution of Jesus to be both authentic and of historical value as an independent Roman source. Non-Christian sources are valuable as they show that even neutral or hostile parties never show any doubt that Jesus existed. They present a rough picture of Jesus that is compatible with that found in the Christian sources: that Jesus was a teacher, had a reputation as a miracle worker, had a brother James, and died a violent death. Archaeology helps scholars better understand Jesus's social world. For example, it indicates that Capernaum, a city important in Jesus's ministry, was poor and small, without even a forum or an agora. This archaeological discovery resonates well with the scholarly view that Jesus advocated reciprocal sharing among the destitute in that area of Galilee. Jesus was a Galilean Jew, born around the beginning of the 1st century, who died in AD 30 or 33 in Judea. The general scholarly consensus is that Jesus was a contemporary of John the Baptist and was crucified as ordered by the Roman governor Pontius Pilate, who held office from AD 26 to 36. 
The Gospels offer several indications concerning the year of Jesus's birth. Matthew 2:1 associates the birth of Jesus with the reign of Herod the Great, who died around 4 BC, and Luke 1:5 mentions that Herod was on the throne shortly before the birth of Jesus, although this gospel also associates the birth with the Census of Quirinius which took place ten years later. Luke 3:23 states that Jesus was "about thirty years old" at the start of his ministry, which according to Acts 10:37–38 was preceded by John the Baptist's ministry, which was recorded in Luke 3:1–2 to have begun in the 15th year of Tiberius's reign (AD 28 or 29). By collating the gospel accounts with historical data and using various other methods, most scholars arrive at a date of birth for Jesus between 6 and 4 BC, but some propose estimates that include a wider range.[t] The date range for Jesus's ministry has been estimated using several different approaches. One of these applies the reference in Luke 3:1–2, Acts 10:37–38, and the dates of Tiberius's reign, which are well known, to give a date of around AD 28–29 for the start of Jesus's ministry. Another approach estimates a date around AD 27–29 by using the statement about the temple in John 2:13–20, which asserts that the temple in Jerusalem was in its 46th year of construction at the start of Jesus's ministry, together with Josephus's statement that the temple's reconstruction was started by Herod the Great in the 18th year of his reign. A further method uses the date of the death of John the Baptist and the marriage of Herod Antipas to Herodias, based on the writings of Josephus, and correlates it with Matthew 14:4 and Mark 6:18. Given that most scholars date the marriage of Herod and Herodias as AD 28–35, this yields a date about AD 28–29. Various approaches have been used to estimate the year of the crucifixion of Jesus. Most scholars agree that he died in AD 30 or 33. The Gospels state that the event occurred during the prefecture of Pilate. The date for the conversion of Paul (estimated to be AD 33–36) acts as an upper bound for the date of Crucifixion. The dates for Paul's conversion and ministry can be determined by analysing the Pauline epistles and the Acts of the Apostles. Astronomers have tried to estimate the precise date of the Crucifixion by analysing lunar motion and calculating historic dates of Passover, a festival based on the lunisolar Hebrew calendar. The most widely accepted dates derived from this method are 7 April AD 30, and 3 April AD 33 (both Julian). Nearly all historians (both modern and historical) agree that Jesus was a real person who historically existed.[g] Scholars have reached a limited consensus on the basics of Jesus's life. Many scholars including biblical ones agree that Joseph, Jesus's father, died before Jesus began his ministry. Joseph is not mentioned in the Gospels during Jesus's ministry. Joseph's death would explain why in Mark 6:3, Jesus's neighbors refer to Jesus as the "son of Mary" (sons were usually identified by their fathers). According to Theissen and Merz, it is common for extraordinary charismatic leaders, such as Jesus, to come into conflict with their ordinary families. In Mark, Jesus's family comes to get him, fearing that he is mad (Mark 3:20–34), and this account is thought to be historical because early Christians would probably not have invented it. After Jesus's death, many members of his family joined the Christian movement. Jesus's brother James became a leader of the Jerusalem Church. 
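As a rough illustration of the calendrical side of this dating method, the short Python sketch below converts the two widely cited Julian-calendar dates to Julian Day Numbers and confirms that both fall on a Friday, the day of Preparation before the Sabbath on which the Gospels place the crucifixion. This is only a plausibility check under standard calendrical formulas; it does not reproduce the astronomers' lunar-visibility calculations that locate 14/15 Nisan, and the function name is illustrative.

```python
# Minimal sketch: check that 7 April AD 30 and 3 April AD 33 (Julian calendar)
# fall on a Friday. Uses the standard Julian-calendar-to-Julian-Day-Number
# formula; it does not model the observational Hebrew calendar itself.

def julian_calendar_to_jdn(year: int, month: int, day: int) -> int:
    """Julian Day Number for a date in the (proleptic) Julian calendar."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

# JDN 0 fell on a Monday, so weekday = JDN % 7 with 0 = Monday.
WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]

for year, month, day in [(30, 4, 7), (33, 4, 3)]:
    jdn = julian_calendar_to_jdn(year, month, day)
    print(f"{day} April AD {year}: JDN {jdn}, {WEEKDAYS[jdn % 7]}")
    # Both dates print "Friday", consistent with a crucifixion on the
    # day before the Sabbath in those years.
```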
Géza Vermes says that the doctrine of the virgin birth of Jesus arose from theological development rather than from historical events. Other scholars take it as significant that the virgin birth is attested by two separate gospels, Matthew and Luke. E. P. Sanders and Marcus Borg note that the birth narratives in Matthew and Luke are ahistorical and the clearest cases of invention in the Gospel narratives of Jesus's life. Dale Allison and W. D. Davies argue that Matthew presents a unified and preexisting infancy narrative based on haggadic legends about Moses, though they maintain that elements in the story such as the names of Mary and Joseph and Jesus being in Nazareth during Herod's reign are historical. Both accounts have Jesus born in Bethlehem, in accordance with Jewish salvation history, and both have him growing up in Nazareth, but Sanders points out that the two report different explanations for how that happened. Luke's account of a worldwide census is not plausible, while Matthew's account is more plausible, but the story reads as though it was invented to identify Jesus as a new Moses, and the historian Josephus reports Herod the Great's brutality without ever mentioning that he massacred little boys. The differences found in the gospel accounts are typical of ancient historical biographies. The contradictions were apparent to early Christians, with harmonizations present in the infancy gospels of Thomas and the Gospel of James, which are dated to the 2nd century AD. Conservative scholars argue that despite the uncertainty of the details, the gospel birth narratives trace back to historical, or at least much earlier pre-gospel traditions. For instance, according to Ben Witherington: What we find in Matthew and Luke is not the story of ... a [god] descending to earth and, in the guise of a man, mating with a human woman, but rather the story of a miraculous conception without the aid of any man, divine or otherwise. As such, this story is without precedent either in Jewish or pagan literature. Sanders says that the genealogies of Jesus are based not on historical information but on the author's desire to show that Jesus was the universal Jewish saviour. In any event, once the doctrine of the virgin birth of Jesus became established, that tradition superseded the earlier tradition that he was descended from David through Joseph. The Gospel of Luke reports that Jesus was a blood relative of John the Baptist, but scholars generally consider this connection to be invented. Most modern scholars consider Jesus's baptism to be a historical fact, along with his crucifixion. The theologian James D. G. Dunn states that they "command almost universal assent" and "rank so high on the 'almost impossible to doubt or deny' scale of historical facts" that they are often the starting points for the study of the historical Jesus. Scholars adduce the criterion of embarrassment, saying that early Christians would not have invented a baptism that might imply that Jesus committed sins and wanted to repent. According to Theissen and Merz, Jesus was inspired by John the Baptist and took over from him many elements of his teaching. Most scholars hold that Jesus lived in Galilee and Judea and did not preach or study elsewhere. They agree that Jesus debated with Jewish authorities on the subject of God, performed some healings, taught in parables and gathered followers. 
Jesus's Jewish critics considered his ministry to be scandalous because he feasted with sinners, fraternized with women, and allowed his followers to pluck grain on the Sabbath. According to Sanders, it is not plausible that disagreements over how to interpret the Law of Moses and the Sabbath would have led Jewish authorities to want Jesus killed. According to Ehrman, Jesus taught that a coming kingdom was everyone's proper focus, not anything in this life. He taught about the Jewish Law, seeking its true meaning, sometimes in opposition to traditions. Jesus put love at the centre of the Law, and following that Law was an apocalyptic necessity. His ethical teachings called for forgiveness, not judging others, loving enemies, and caring for the poor. Funk and Hoover note that typical of Jesus were paradoxical or surprising turns of phrase, such as advising one, when struck on the cheek, to offer the other cheek to be struck as well. The Gospels portray Jesus teaching in well-defined sessions, such as the Sermon on the Mount in the Gospel of Matthew or the parallel Sermon on the Plain in Luke. While these teaching sessions include authentic teachings of Jesus, Theissen and Merz contend that the scenes were invented by the evangelists to frame these teachings, which were originally recorded without context. Le Donne, however, rejects the form-critical notion that smaller units of tradition held a defined stage of circulation before the gospels' composition. While Jesus's miracles fit within the social context of antiquity, he defined them differently. First, he attributed them to the faith of those healed. Second, he connected them to end-times prophecy. Jesus chose twelve disciples (the "Twelve"). According to Bart Ehrman, Jesus's promise that the Twelve would rule is historical, because the Twelve included Judas Iscariot. In Ehrman's view, no Christians would have invented a line from Jesus promising rulership to the disciple who betrayed him. In Mark, the disciples play hardly any role other than a negative one. While others sometimes respond to Jesus with complete faith, his disciples are puzzled and doubtful. They serve as a foil to Jesus and to other characters. The failings of the disciples are probably exaggerated in Mark, and the disciples make a better showing in Matthew and Luke. Recent studies tend to suggest that Mark is not as negative towards Peter as a previous generation of scholars thought. Sanders says that Jesus's mission was not about repentance, although he acknowledges that this opinion is unpopular. He argues that repentance appears as a strong theme only in Luke, that repentance was John the Baptist's message, and that Jesus's ministry would not have been scandalous if the sinners he ate with had been repentant. According to Theissen and Merz, Jesus taught that God was generously giving people an opportunity to repent. Jesus referred to himself as a "son of man" in the colloquial sense of "a person", but scholars do not know whether he also meant himself when he referred to the heavenly "Son of Man". Paul the Apostle and other early Christians interpreted the "Son of Man" as the risen Jesus. Dale Allison argues that Jesus identified himself as the Son of Man in Daniel, rejecting the notion of another eschatological figure. The Gospels refer to Jesus not only as a messiah but in the absolute form as "the Messiah" or, equivalently, "the Christ". In early Judaism, this absolute form of the title is not found, but only phrases such as "his messiah". 
The tradition is ambiguous enough to leave room for debate as to whether Jesus defined his eschatological role as that of the Messiah. The Jewish messianic tradition included many different forms, some of them focused on a messiah figure and others not. Based on the Christian tradition, Gerd Theissen advances the hypothesis that Jesus saw himself in messianic terms but did not claim the title "Messiah". Bart Ehrman argues that Jesus did consider himself to be the Messiah, albeit in the sense that he would be the king of the new political order that God would usher in, not in the sense that most people today think of the term. Around AD 30, Jesus and his followers travelled from Galilee to Jerusalem to observe Passover. Jesus caused a disturbance in the Second Temple, which was the centre of Jewish religious and civil authority. Sanders associates it with Jesus's prophecy that the Temple would be totally demolished. Jesus held a last meal with his disciples, which is the origin of the Sacrament of the Holy Eucharist. According to John P. Meier, Jesus having a final meal with his disciples is generally accepted among scholars, and belongs to the framework of the narrative of Jesus' life, with a majority viewing Mark 14 as substantially historical. The meal appears to have pointed to Jesus's place in the Kingdom of God when Jesus probably knew he was to be killed, although he may have still hoped that God might intervene. The Gospels say that Jesus was betrayed to the authorities by a disciple, and many scholars consider this report to be highly reliable. He was executed on the orders of Pontius Pilate, the Roman prefect of Judaea. Pilate most likely saw Jesus's reference to the Kingdom of God as a threat to Roman authority and worked with the Temple elites to have Jesus executed. The Sadducean high-priestly leaders of the Temple more plausibly had Jesus executed for political reasons than for his teaching. They may have regarded him as a threat to stability, especially after he caused a disturbance at the Second Temple. Other factors, such as Jesus's triumphal entry into Jerusalem, may have contributed to this decision. Most scholars consider Jesus's crucifixion to be factual because early Christians would not have invented the painful death of their leader. After Jesus's death, his followers said he was restored to life, although the exact details of their experiences are unclear. The gospel reports contradict each other; Sanders suggests competition among those claiming to have seen him first rather than deliberate fraud, while White emphasizes differences in the agendas of the evangelists. Differences between accounts were a feature of ancient biographies, such as the accounts of Otho in Suetonius and Plutarch. Another common hypothesis among historians is that all reported perceptions of Jesus are confabulated or a case of mistaken identity. The followers of Jesus formed a community to wait for his return and the founding of his kingdom. Modern research on the historical Jesus has not led to a unified picture of the historical figure, partly because of the variety of academic traditions represented by the scholars. Given the scarcity of historical sources, it is generally difficult for any scholar to construct a portrait of Jesus that can be considered historically valid beyond the basic elements of his life. The portraits of Jesus constructed in these quests often differ from each other, and from the image portrayed in the Gospels. 
Jesus is seen as the founder of, in the words of Sanders, a "renewal movement within Judaism". One of the criteria used to discern historical details in the "third quest" is the criterion of plausibility, relative to Jesus's Jewish context and to his influence on Christianity. A disagreement in contemporary research is whether Jesus was apocalyptic. Most scholars conclude that he was an apocalyptic preacher, like John the Baptist and Paul the Apostle. Certain prominent North American scholars, such as Burton Mack and John Dominic Crossan, advocate for a non-eschatological Jesus, one who is more of a Cynic sage than an apocalyptic preacher. In addition to portraying Jesus as an apocalyptic prophet, a charismatic healer or a cynic philosopher, some scholars portray him as the true messiah or an egalitarian prophet of social change. The attributes described in the portraits sometimes overlap, and scholars who differ on some attributes sometimes agree on others. Since the 18th century, scholars have occasionally put forth that Jesus was a political national messiah, but the evidence for this portrait is negligible. Likewise, the proposal that Jesus was a Zealot does not fit with the earliest strata of the Synoptic tradition. Jesus grew up in Galilee and much of his ministry took place there. The languages spoken in Galilee and Judea during the 1st century AD included Jewish Palestinian Aramaic, Hebrew, and Greek, with Aramaic being predominant. There is substantial consensus that Jesus gave most of his teachings in Aramaic in the Galilean dialect. Other than Aramaic and Hebrew, it is likely that he was also able to speak Greek. Modern scholars agree that Jesus was a Jew of 1st-century Judea. Ioudaios in New Testament Greek[u] is a term which in the contemporary context may refer to religion (Second Temple Judaism), ethnicity (of Judea), or both. In a review of the state of modern scholarship, Amy-Jill Levine writes that the entire question of ethnicity is "fraught with difficulty", and that "beyond recognizing that 'Jesus was Jewish', rarely does the scholarship address what being 'Jewish' means". The New Testament gives no description of the physical appearance of Jesus before his death—it is generally indifferent to racial appearances and does not refer to the features of the people it mentions. Jesus probably looked like a typical Jewish man of his time and place; standing around 166 cm (5 ft 5 in) tall with a thin but fit build, olive-brown skin, brown eyes and short, dark hair. He also probably had a beard that was not particularly long or heavy. The Christ myth theory is the hypothesis that Jesus of Nazareth never existed; or that if he did, he had virtually nothing to do with the founding of Christianity and the accounts in the gospels.[v] Stories of Jesus's birth, along with other key events, have so many mythic elements that some scholars have suggested that Jesus himself was a myth. Bruno Bauer (1809–1882) taught that the first Gospel was a work of literature that produced history rather than described it. According to Albert Kalthoff (1850–1906), a social movement produced Jesus when it encountered Jewish messianic expectations. Arthur Drews (1865–1935) saw Jesus as the concrete form of a myth that predated Christianity. Despite arguments put forward by authors who have questioned the existence of a historical Jesus, virtually all scholars of antiquity accept that Jesus was a historical figure and consider the myth theory to be fringe. 
Religious perspectives Jesus's teachings and the retelling of his life story have significantly influenced the course of human history, and have directly or indirectly affected the lives of billions of people, even non-Christians, worldwide. He is considered by many people to be the most influential figure to have ever lived, finding a significant place in numerous cultural contexts. Apart from his own disciples and followers, the Jews of Jesus's day generally rejected him as the messiah, as does Judaism today. Christian theologians, ecumenical councils, reformers and others have written extensively about Jesus over the centuries. Christian denominations have often been defined or characterized by their descriptions of Jesus. Meanwhile, Manichaeans, Gnostics, Muslims, Druzes, the Baháʼís, and others have found prominent places for Jesus in their religions. Jesus is the central figure of Christianity. Although Christian views of Jesus vary, it is possible to summarize the key beliefs shared by the major denominations, as stated in their catechetical or confessional texts. Christian views of Jesus are derived from the texts of the New Testament, including the canonical gospels and letters such as the Pauline epistles and the Johannine writings. These documents outline the key beliefs held by Christians about Jesus, including his divinity, humanity, and earthly life, and that he is the Christ and the Son of God. Despite their many shared beliefs, not all Christian denominations agree on all doctrines, and both major and minor differences on teachings and beliefs have persisted throughout Christianity for centuries. The New Testament states that the resurrection of Jesus is the foundation of the Christian faith. Christians believe that through his sacrificial death and resurrection, humans can be reconciled with God and are thereby offered salvation and the promise of eternal life. Recalling the words of John the Baptist in the gospel of John, these doctrines sometimes refer to Jesus as the Lamb of God, who was crucified to fulfil his role as the servant of God. Jesus is thus seen as the new and last Adam, whose obedience contrasts with Adam's disobedience. Christians view Jesus as a role model, whose God-focused life believers are encouraged to imitate. Most Christians believe that Jesus is both human and the Son of God. While there has been theological debate over his nature,[w] Trinitarian Christians generally believe that Jesus is the Logos, God's incarnation and God the Son, both fully divine and fully human. The doctrine of the Trinity is not universally accepted among Christians. With the Reformation, Christians such as Michael Servetus and the Socinians started questioning the ancient creeds that had established Jesus's two natures. Nontrinitarian Christian groups include the Church of Jesus Christ of Latter-day Saints, Unitarians and Jehovah's Witnesses. Christians revere not only Jesus but also his name. Devotions to the Holy Name of Jesus go back to the earliest days of Christianity. These devotions and feasts exist in both Eastern and Western Christianity. Judaism rejects the idea of Jesus (or any future Jewish messiah) being God, or a mediator to God, or part of a Trinity. It holds that Jesus is not the messiah, arguing that he neither fulfilled the messianic prophecies in the Tanakh nor embodied the personal qualifications of the Messiah. 
Jews argue that Jesus did not fulfil prophecies to build the Third Temple, gather Jews back to Israel, bring world peace, and unite humanity under the God of Israel. Furthermore, according to Jewish tradition, there were no prophets after Malachi, who delivered his prophecies in the 5th century BC. Judaic criticism of Jesus is long-standing and includes a range of stories found in the Talmud, written and compiled between the 3rd and 5th centuries. In one such story, Yeshu HaNozri ('Jesus the Nazarene'), a lewd apostate, is executed by the Jewish high court for spreading idolatry and practising magic. According to some, the form Yeshu is an acronym which in Hebrew reads "may his name and memory be blotted out". The majority of contemporary scholars consider that this material provides no information on the historical Jesus. The Mishneh Torah, a late 12th-century work of Jewish law written by Moses Maimonides, states that Jesus is a "stumbling block" who makes "the majority of the world to err and serve a god other than the Lord". Medieval Hebrew literature contains the anecdotal "Episode of Jesus" (known also as Toledot Yeshu), in which Jesus is described as being the son of Joseph, the son of Pandera (see: Episode of Jesus). The account portrays Jesus as an impostor. Manichaeism, an ancient religious movement, became one of the earliest organized religions outside of Christianity to honour Jesus as a significant figure. Within the Manichaean belief system, Jesus is revered alongside other prominent prophets such as Zoroaster, Gautama Buddha, and Mani himself. A major figure in Islam, Jesus (often referred to by his Quranic name ʿĪsā ([/ʕiːsaː/]) and in some qiraʼat pronounced as ʿĪsē ([/ʕiːseː/])) is considered to be a messenger of God and the messiah (al-Masīḥ) who was sent to guide the Children of Israel (Banī Isrāʾīl) with a new scripture, the Gospel (referred to in Islam as Injīl). The form ʿĪsē is a pre-Islamic phonosemantic correspondence with the Safaitic name ʿsy, attested in Arabian inscriptions. Muslims regard the gospels' accounts in the New Testament as partially authentic, and believe that Jesus's original message was altered (taḥrīf) and that Muhammad came later to revive it. Belief in Jesus (and all other messengers of God) is a requirement for being a Muslim. The Quran mentions Jesus by name 25 times—more often than Muhammad—and emphasizes that Jesus was a mortal human who, like all other prophets, had been divinely chosen to spread God's message. While the Quran affirms the Virgin birth of Jesus, he is considered to be neither an incarnation nor the son of God. Islamic texts emphasize a strict notion of monotheism (tawḥīd) and forbid the association of partners with God, which would be idolatry. The Quran describes the annunciation to Mary (Maryam) by the Holy Spirit that she is to give birth to Jesus while remaining a virgin. It calls the virgin birth a miracle that occurred by the will of God. The Quran (21:91 and 66:12) states that God breathed his spirit into Mary while she was chaste. Jesus is called a "spirit from God" because he was born through the action of the Spirit, but that belief does not imply his pre-existence. To aid in his ministry to the Jewish people, Jesus was given the ability to perform miracles, by permission of God rather than by his own power. Through his ministry, Jesus is seen as a precursor to Muhammad. 
In the Quran (4:157–159) it is said that Jesus was not killed but was merely made to appear that way to unbelievers, and that he was raised into the heavens while still alive by God. According to most classic Sunni and Twelver Shi'ite interpretations of these verses, the likeness of Jesus was cast upon a substitute (most often one of the apostles), who was crucified in Jesus's stead. Some medieval Muslims, including the ghulāt writing under the name of al-Mufaddal ibn Umar al-Ju'fi, the Brethren of Purity, various Isma'ili philosophers, and the Sunni mystic al-Ghazali, affirmed the historicity of Jesus's crucifixion. These thinkers held the docetic view that, although Jesus's human body had died on the cross, his spirit had survived and ascended into heaven, so that his death was only an appearance. Nevertheless, to Muslims it is the ascension rather than the crucifixion that constitutes a major event in the life of Jesus. There is no mention of his resurrection on the third day, and his death plays no special role in Islamic theories of salvation. Jesus is a central figure in Islamic eschatology: Muslims believe that he will return to Earth at the end of time and defeat the Antichrist (ad-Dajjal) by killing him. According to the Quran, the coming of Muhammad (also called "Ahmad") was predicted by Jesus: And ˹remember˺ when Jesus, son of Mary, said, "O children of Israel! I am truly Allah's messenger to you, confirming the Torah which came before me, and giving good news of a messenger after me whose name will be Aḥmad." Yet when the Prophet came to them with clear proofs, they said, "This is pure magic." — Surah As-Saf 61:6 Through this verse, early Arab Muslims claimed legitimacy for their new faith in the existing religious traditions and the predictions of Jesus. The Ahmadiyya Muslim Community has several teachings about Jesus. Ahmadis believe that he was a mortal man who survived his crucifixion and died a natural death at the age of 120 in Kashmir, India, and is buried at Roza Bal. In the Druze faith, Jesus is considered and revered as one of the seven spokesmen or prophets (natiq), defined as messengers or intermediaries between God and mankind, along with figures including Moses, Muhammad and Muhammad ibn Isma'il, each of them sent at a different period of history to preach the message of God. In Druze tradition, Jesus is known under three titles: the True Messiah (al-Masih al-Haq), the Messiah of all Nations (Masih al-Umam), and the Messiah of Sinners. This is due, respectively, to the belief that Jesus delivered the true Gospel message, the belief that he was the Saviour of all nations, and the belief that he offers forgiveness. In the Baháʼí Faith, Jesus is considered one of the Manifestations of God, defined as divine messengers or prophets sent by God to guide humanity, along with other religious figures such as Moses, Krishna, Zoroaster, Buddha, Muhammad, and Baháʼu'lláh. Baháʼís believe that these religious founders or leaders have contributed to the progressive revelation by bringing spiritual and moral values to humanity in their own time and place. As a Manifestation of God, Jesus is believed to reflect God's qualities and attributes, but is not considered the only saviour of humanity nor the incarnation of God. Baháʼís believe in the virgin birth, but see the resurrection and the miracles of Jesus as symbolic. 
In Christian Gnosticism (now a largely extinct religious movement), Jesus was sent from the divine realm and provided the secret knowledge (gnosis) necessary for salvation. It is important to note that Gnosticism is not a homogeneous religion, but an umbrella term used by modern scholars to describe diverse religious and philosophical ideas and systems that emerged in the late first century among early Christian sects and other religious movements. Most Gnostics believed that Jesus was a human who became possessed by the spirit of "the Christ" at his baptism. This spirit left Jesus's body during the crucifixion but was rejoined to him when he was raised from the dead. Some Gnostics were docetics, believing that Jesus did not have a physical body, but only appeared to possess one. The Gnostic Jesus can both differ greatly from the Christian Jesus, but also build on him. For instance, the Testimony of Truth, a Gnostic Christian text found in the Nag Hammadi library buried around 400 AD, explains that the serpent in Genesis 3 who instructs Adam and Eve is Jesus. Some Hindus consider Jesus to be an avatar or a sadhu. Paramahansa Yogananda, an Indian guru, taught that Jesus was the reincarnation of Elisha and a student of John the Baptist, the reincarnation of Elijah. Some Buddhists, including Tenzin Gyatso, the 14th Dalai Lama, regard Jesus as a bodhisattva who dedicated his life to the welfare of people. The New Age movement entertains a wide variety of views on Jesus. Theosophists, from whom many New Age teachings originated, refer to Jesus as the Master Jesus, a spiritual reformer, and they believe that Christ, after various incarnations, occupied the body of Jesus. In the Anthroposophy founded by Rudolf Steiner, Jesus Christ is a central balancing force mediating between the two opposing polarities of evil, namely the fanatical exalted mysticism of Lucifer, and the cold materialism of Ahriman. The Urantia Book teaches that Jesus is one of more than 700,000 heavenly sons of God. Antony Theodore in the book Jesus Christ in Love writes that there is an underlying oneness of Jesus's teachings with the messages contained in Quran, Vedas, Upanishads, Talmud and Avesta. Atheists reject Jesus's divinity, but have different views about him—from challenging his mental health to emphasizing his "moral superiority" (Richard Dawkins). Artistic depictions As in other Early Christian art, the earliest depictions date to the late 2nd or early 3rd century, and surviving images are found in the Catacombs of Rome. Some of the earliest depictions of Jesus at the Dura-Europos church date to before 256. A wide range of depictions of Jesus appeared during the next two millennia, influenced by cultural settings, political circumstances and theological contexts. The depiction of Christ in pictorial form was highly controversial in the early Church.[x] From the 5th century, flat painted icons became popular in the Eastern Church. The Byzantine Iconoclasm acted as a barrier to developments in the East, but by the 9th century, art was permitted again. The Protestant Reformation brought renewed resistance to imagery, but total prohibition was atypical, and Protestant objections to images have tended to reduce since the 16th century. Although large images are generally avoided, few Protestants now object to book illustrations depicting Jesus. The use of depictions of Jesus is advocated by the leaders of denominations such as Anglicans and Catholics and is a key element of the Eastern Orthodox tradition. 
In Eastern Christian art, the Transfiguration was a major theme, with every Eastern Orthodox monk trained in icon painting having to prove his craft by painting an icon depicting it. Icons receive the external marks of veneration, such as kisses and prostration, and they are thought to be powerful channels of divine grace. In Western Europe, the Renaissance brought forth artists who focused on depictions of Jesus; Fra Angelico and others followed Giotto in the systematic development of uncluttered images. Before the Protestant Reformation, the crucifix was common in Western Christianity. It is a model of the cross with Jesus crucified on it. The crucifix became the central ornament of the altar in the 13th century, a use that has been nearly universal in Roman Catholic churches since then. Associated relics The total destruction that ensued with the siege of Jerusalem by the Romans in AD 70 made the survival of items from 1st-century Judea very rare and almost no direct records survive about the history of Judaism from the last part of the 1st century to the 2nd century.[y] Biblical scholar Margaret M. Mitchell writes that, although Eusebius reports (Ecclesiastical History III 5.3) that the early Christians left Jerusalem for Pella just before Jerusalem was subjected to the final lockdown, we must accept that no items from the early Jerusalem Church have survived. Joe Nickell writes, "as investigation after investigation has shown, not a single, reliably authenticated relic of Jesus exists."[z] Throughout the history of Christianity, relics attributed to Jesus have been claimed, but doubt has been cast on them. The 16th-century Catholic theologian Erasmus wrote sarcastically about the proliferation of relics and the number of buildings that could have been constructed from the wood claimed to be from the cross used in the Crucifixion. Similarly, while experts debate whether Jesus was crucified with three nails or four, at least thirty holy nails are venerated as relics across Europe. Some relics, such as purported remnants of the crown of thorns placed on the head of Jesus, receive only a modest number of pilgrims, while the Shroud of Turin (which is associated with an approved Catholic devotion to the Holy Face of Jesus), has received millions, including the popes John Paul II and Benedict XVI. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_space] | [TOKENS: 3986] |
Contents Sociology of space The sociology of space is a sub-discipline of sociology that mostly borrows from theories developed within the discipline of geography, including the subfields of human geography, economic geography, and feminist geography. The sociology of space examines the social and material constitution of spaces. It is concerned with understanding the social practices, institutional forces, and material complexity of how humans and spaces interact. The sociology of space is an inter-disciplinary area of study, drawing on various theoretical traditions including Marxism, postcolonialism, and Science and Technology Studies, and overlaps with and encompasses theorists from various academic disciplines such as geography and architecture. Edward T. Hall developed the study of proxemics, which concentrates on the empirical analysis of space in psychology. Definition of space Space is one of the most important concepts within the disciplines of social science, as it is fundamental to our understanding of geography. The term "space" has been defined variously by scholars. In general terms, the Oxford English Dictionary defines space in two ways. However, the human geographers' interest is in the objects within the space and their relative position, which involves the description, explanation and prediction of the distribution of phenomena. Thus, the relationships between objects in space are central to the study. Michel Foucault defines space as: "The space in which we live, which draws us out of ourselves, in which the erosion of our lives, our time and our history occurs, the space that claws and gnaws at us, is also, in itself, a heterogeneous space… we live inside a set of relations." Nigel Thrift also defines space as: "The outcome of a series of highly problematic temporary settlements that divide and connect things up into different kinds of collectives which are slowly provided with the meaning which render them durable and sustainable." In short, "space" is the social space in which we live and create relationships with other people, societies and surroundings. Space is an outcome of the hard and continuous work of building up and maintaining collectives by bringing different things into alignment. All kinds of different spaces can and therefore do exist, which may or may not relate to each other. Thus, through space, we can understand more about social action. History of the sociology of space Georg Simmel has been seen as the classical sociologist who was most important to this field. Simmel wrote on "the sociology of space" in his 1908 book "Sociology: Investigations on the Forms of Sociation". His concerns included the process of metropolitanisation and the separation of leisure spaces in modern economic societies. The category of space long played a subordinate role in sociological theory formation. Only in the late 1980s did it come to be realised that certain changes in society cannot be adequately explained without taking greater account of the spatial components of life. This shift in perspective is referred to as the spatial turn. The space concept directs attention to organisational forms of juxtaposition. The focus is on differences between places and their mutual influence. This applies equally to the micro-spaces of everyday life and the macro-spaces at the nation-state or global levels. 
The theoretical basis for the growing interest of the social sciences in space was set primarily by English- and French-speaking sociologists, philosophers, and human geographers. Of particular importance are Michel Foucault's essay "Of Other Spaces", in which the author proclaims the "age of space", and Henri Lefebvre's seminal work "La production de l'espace". The latter provided the grounding for Marxist spatial theory, on which David Harvey, Manuel Castells, Edward Soja, and others have built. Marxist theories of space, which are predicated on structural (i.e., capitalist or global) determinants of spaces and the growing homogenization of space, are confronted by action-theoretical conceptions, which stress the importance of corporeal placing and the perception of spaces as subjective constructions, albeit habitually predetermined ones. One example is the theory of space of the German sociologist Martina Löw. Approaches deriving from the post-colonialism discourse have attracted greater attention in recent years. Also in contrast to Marxist concepts of space, the British geographer Doreen Massey and the German sociologist Helmuth Berking, for instance, emphasise the heterogeneity of local contexts and the place-relatedness of our knowledge about the world. Duality of space Martina Löw developed the idea of a "relational" model of space, which focuses on the "orderings" of living entities and social goods, and examines how space is constituted in processes of perception, recall, or ideation to manifest itself as societal structure. From a social theory point of view, it follows on from the theory of structuration proposed by Anthony Giddens, whose concept of the "duality of structure" Löw extends in sociological terms into a "duality of space". The basic idea is that individuals act as social agents (and constitute spaces in the process), but that their action depends on economic, legal, social, cultural, and, finally, spatial structures. Spaces are hence the outcome of action. At the same time, spaces structure action, that is to say, spaces can both constrain and enable action. With respect to the constitution of space, Löw distinguishes analytically between two generally mutually determining factors: "spacing" and "synthesis". Spacing refers to the act of placing, or the state of being placed, of social goods and people in places. According to Löw, however, an ordering created through placings is only effectively constituted as space where the elements that compose it are actively interlinked by people – in processes of perception, ideation, or recall. Löw calls this synthesis. This concept has been empirically tested in studies such as those by Lars Meier (who examined the constitution of space in the everyday life of financial managers in London and Singapore), Cedric Janowicz (who carried out an ethnographic, space-sociological study of food supply in the Ghanaian city of Accra), and Silke Streets (who looked at processes of space constitution in the creative industries in Leipzig). 
His theory of social space was influenced by the Bauhaus art movement. In Lefebvre's view of the 1970s, this spatial production resulted in a space of non-reflexive everydayness marked by alienation, dominated through mathematical-abstract concepts of space, and reproduced in spatial practice. Lefebvre sees a line of flight from alienated spatiality in the spaces of representation – in notions of non-alienated, mythical, pre-modern, or artistic visions of space. Marxist spatial theory was given decisive new impetus by David Harvey, in particular, who was interested in the effects of the transition from Fordism to "flexible accumulation" on the experience of space and time. He shows how various innovations at the economic and technological levels have breached the crisis-prone inflexibility of the Fordist system, thus increasing the turnover rate of capital. This causes a general acceleration of economic cycles. According to Harvey, the result is "time–space compression". While the feeling for the long term, for the future, for continuity is lost, the relationship between proximity and distance becomes more and more difficult to determine. Lefebvre's spatial triad was then appropriated by different scholars, including Edward Soja and David Harvey, who carried on this new tradition in human geography. Among them, the best-known appropriated version of the spatial triad is Thirdspace, formulated by Soja. His theory categorizes urban space into three types. Soja argues that our old ways of thinking about space (first and second space theories) can no longer accommodate the way the world works, because he believed that spaces may not be contained within one social category; they may include different aspects of many categories or develop within the boundaries of a number of categories (e.g., two different cultures combine and emerge as a third culture; this third hybrid space displaces the original values that constitute it and sets up new values and perspectives that are different from the first two spaces; thus, third space theory can explain some of the complexity of poverty, social exclusion and social inclusion, gender and race issues). Lefebvre introduced the concept of triadic representational spaces as a synthesis of mind–body dualism, as opposed to monism or phenomenology. Under a Lefebvrian "unity theory", the mind–body problem is brought together through the triad of social space, mental space, and physical space. Influenced by Paul Ricœur, J. N. Entrikin attempts to solve the mind–body problem of social space by presupposing Cartesian dualism to argue that narrative can be an intermediary between mind and extension. 
He claims that local contexts form a sort of framework or filter through which global processes and globally circulating images and symbols are appropriated, thus attaining meaning. For instance, the film character Conan the Barbarian is a different figure in radical right-wing circles in Germany than in the black ghettos of Chicago's South Side, just as McDonald's means something different in Moscow than in Paris. Relational view of space The geographer and critical theorist Nigel Thrift set out a relational view of space in which, rather than being viewed as a container within which the world proceeds, space should be seen as a co-product of those proceedings. He distinguished four kinds of space constructed in modern human geography: 1. the empirical construction of space, 2. unblocking space, 3. image space and 4. place space. The first space is the empirical construction of space. Empirical space refers to the process whereby the mundane fabric of daily life is constructed. Simple things like cars, houses, mobile phones, computers, and roads are great achievements of our daily life, and they play a very important role in making up who we are today. For example, technology such as GPS did not suddenly come into existence; its groundwork was laid down in the 18th century and developed over time. The first space is real and tangible, and it is also known as physical space. The second space is unblocking space. This type of space refers to the process whereby routine pathways of interaction are set up, around which boundaries are often drawn. These routines may include the movement of office workers, the interaction of drunk teenagers, and the flow of goods, money, people, and information. Unlike earlier geography, in which a space was accepted as having blocked boundaries (for example, a capitalist space, a neoliberal space or a city space), we have begun to realize that there is no such thing as a fixed boundary in space. The space of the world is flowing and transforming so continuously that it is very difficult to describe in a fixed way. The second space is ideological/conceptual, and it is also known as mental space. For example, the second space can explain the behaviors of people from different social classes and the social segregation between rich and poor people. The third space is image space, which refers to the process whereby images have produced new kinds of space. Images may take different forms and shapes, ranging from painting to photograph, from portrait to postcard, and from religious themes to entertainment. Nowadays, we are highly influenced by images in many ways, and certain images can tell us about new social and cultural values, or something new about how we see the world. Images, symbols and signs do have some kind of spatial expression. The fourth space is place space, which refers to the process whereby spaces are ordered in ways that open up affective and other embodied potentials. Place space has more meaning than a mere location, and it can be represented as different types of space. This fourth type of space recognizes that place is a vital actor in shaping people's lives in certain ways, and that place lets us understand all kinds of things which are otherwise hidden from us. 
Scale: the local and the global Andrew Herod mentioned that scale, within human geography, is typically seen in one of two ways: either as a real material thing which actually exists and is the result of political struggle and/or social process, or as a way of framing our understanding of the world. People's lives across the globe have been re-scaled by contemporary economic, political, cultural and social processes, such as globalization, in complex ways. As a result, we have seen the creation of supranational political bodies such as the European Union, and the devolution of political power from the nation-state to regional political bodies. We have also experienced increasing homogenization and 'Americanization' through the process of globalization, while localist tendencies (or counter-forces) among people who defend traditional ways of life increase around the world. The process of re-scaling people's lives and the relationship between the two extremes of our scaled lives – the 'global' and the 'local' – were brought into question. Until the 1980s, the concept of 'scale' itself was taken for granted rather than theorized, although physical and human geographers looked at issues at the 'regional scale' or 'national scale'. Questions such as whether scale is simply a mental device for categorizing and ordering the world, or whether scales really exist as material social products, were debated between materialists and idealists. Some geographers draw upon Immanuel Kant's idealist philosophy to argue that scales are handy conceptual mechanisms for ordering the world, while others, drawing upon Marxist ideas of materialism, argue that scales really exist in the world and are real social products. For idealists of Kantian inspiration, the 'global' is defined by the geologically given limits of the earth and the 'local' is defined as a spatial resolution useful for comprehending processes and practices. For materialists, the 'national' scale is a scale that had to be actively created through economic and political processes, not a scale that simply existed in a logical hierarchy between the global and the regional. The notion of 'becoming' and the focus on the politics of producing scales have been central to materialist arguments concerning the global scale. It is important to recognize that social actors may have to work just as hard to become 'local' as they have to work to become 'global'. People have paid attention to how transnational corporations have 'gone global', how institutions of governance have 'become' supranational and how labour unions have sought to 'globalize' their operations to match those of an increasingly 'globalized' city. Concerning the 'global' and 'local' scales, Kevin Cox mentioned that moving from the local to the global scale 'is not a movement from one discrete arena to another' but a process of developing networks of associations that allow actors to shift between various spaces of engagement. In his view, 'scale' is seen as a process rather than as a fixed entity; in other words, the global and the local are not static 'arenas' within which social life plays out but are constantly made by social actions. For example, a political organization might attempt to go 'global' to engage with actors or opportunities outside of its own space; likewise, a transnational corporation may attempt to 'go local' by tailoring its products and operations in different places. 
Gibson-Graham (2002) has identified at least six ways in which the relationship between the local and the global is often viewed. In some Western thought, greater size and extensiveness imply domination and superior power, such that the local is often represented as 'small and relatively powerless, defined and confined by the global'. So, the global is a force and the local is its field of play. However, the local can serve as a powerful scale of political organization; the global is not a scale controlled only by capital – those who challenge capital can also organize globally (Herod, A.). There is also the concept of 'thinking globally and acting locally', as viewed by neoliberals. For representing how the world is scaled, there are five different popular metaphors: the ladder, concentric circles, Matryoshka nesting dolls, earthworm burrows and tree roots. First, in the metaphor of a hierarchical ladder, the global, as the highest rung on the ladder, is seen to be above the local and all other scales. Second, the use of the concentric-circles metaphor leaves us with a particular way of conceptualizing the scalar relationship between places. In this second metaphor, the local is seen as a relatively small circle, with the regional as a larger circle encompassing it, while the national and the global scales are still larger circles encompassing the local and the regional. Third, in the hierarchy of Russian Matryoshka nesting dolls, the global can contain other scales, but this does not work the other way round; for instance, the local cannot contain the global. For the fourth metaphor concerning thinking on scale, the French social theorist Bruno Latour argued that a world of places is 'networked' together. Such a metaphor leaves us with an image of scale in which the global and the local are connected together and not totally separated from each other. For the tree-roots metaphor, which is similar to the earthworm-burrow metaphor, as earthworm burrows or tree roots penetrate different strata of the soil, it is difficult to determine exactly where one scale ends and another begins. When thinking about the use of metaphor, one should be aware that the choice of one metaphor over another is not made on the basis of which is empirically a 'more accurate' representation of something, but on the basis of how someone is attempting to understand a particular phenomenon. Such an appreciation of metaphors is important because it suggests that how we talk about scale impacts upon the ways in which we engage socially and politically with our scaled world, and that may impact on how we conduct our social, economic and political praxis and so make landscapes (Herod, A.). See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Karnak] | [TOKENS: 3234] |
Contents Karnak The Karnak Temple Complex, commonly known as Karnak (/ˈkɑːr.næk/), comprises a vast mix of temples, pylons, chapels, and other buildings near Luxor, Egypt. Construction at the complex began during the reign of Senusret I (reigned 1971–1926 BC) in the Middle Kingdom (c. 2000–1700 BC) and continued into the Ptolemaic Kingdom (305–30 BC), although most of the extant buildings date from the New Kingdom. The area around Karnak was the ancient Egyptian Ipet-isut ("The Most Selected of Places") and the main place of worship of the 18th Dynastic Theban Triad, with the god Amun as its head. It is part of the monumental city of Thebes, and in 1979 it was added to the UNESCO World Heritage List along with the rest of the city. Karnak gets its name from the nearby, and partly surrounded, modern village of El-Karnak, 2.5 kilometres (1.6 miles) north of Luxor. Name The original name of the temple was Ipet-isut, meaning "The Most Select of Places". The complex's modern name "Karnak" comes from the nearby village of el-Karnak, which means "fortified village". Overview The complex is a vast open site and includes the Karnak Open Air Museum. It is believed to be the second-most-visited historical site in Egypt; only the Giza pyramid complex near Cairo receives more visits. It consists of four main parts, of which only the largest is currently open to the public. The term Karnak often is understood as being the Precinct of Amun-Re only, because this is the only part most visitors see. The three other parts, the Precinct of Mut, the Precinct of Montu, and the dismantled Temple of Amenhotep IV, are closed to the public. There also are a few smaller temples and sanctuaries connecting the Precinct of Mut, the Precinct of Amun-Re, and the Luxor Temple. The Precinct of Mut is very ancient, being dedicated to an Earth and creation deity, but not yet restored. The original temple was destroyed and partially restored by Hatshepsut, although another pharaoh built around it in order to change the focus or orientation of the sacred area. Many portions of it may have been carried away for use in other buildings. The key difference between Karnak and most of the other temples and sites in Egypt is the length of time over which it was developed and used. Construction of temples started in the Middle Kingdom and continued into Ptolemaic times. Approximately thirty pharaohs contributed to the buildings, enabling it to reach a size, complexity, and diversity not seen elsewhere. Few of the individual features of Karnak are unique, but the size and number of features are vast. The deities represented range from some of the earliest worshipped to those worshipped much later in the history of the Ancient Egyptian culture. Although destroyed, it also contained an early temple built by Amenhotep IV (Akhenaten), the pharaoh who later would celebrate a nearly monotheistic religion he established that prompted him to move his court and religious center away from Thebes. It also contains evidence of adaptations, where the buildings of the ancient Egyptians were used by later cultures for their own religious purposes, such as Coptic churches. The Great Hypostyle Hall in the Precinct of Amun-Re has an area of 5,000 m2 (1.2 acres) with 134 massive columns arranged in 16 rows. One hundred and twenty-two of these columns are 10 metres (33 ft) tall, and the other 12 are 21 metres (69 ft) tall with a diameter of over 3 metres (9.8 ft). The architraves, on top of these columns, are estimated to weigh 70 tons. 
These architraves may have been lifted to these heights using levers. This would be a time-consuming process and also would require great balance to get to such heights. A common alternative theory regarding how they were moved is that large ramps were constructed of sand, mud, brick or stone and that the stones were then towed up the ramps. If stone had been used for the ramps, they would have been able to use much less material. The top of the ramps presumably would have employed either wooden tracks or cobblestones for towing the megaliths. There is an unfinished pillar in an out-of-the-way location that indicates how it would have been finished. Final carving was executed after the drums were put in place so that it was not damaged while being placed. Several experiments moving megaliths with ancient technology were made at other locations – some of which are amongst the largest monoliths in the world. The sun god's shrine was built so that it has light focused upon it during the winter solstice. In 2009, UCLA launched a website dedicated to virtual reality digital reconstructions of the Karnak complex and other resources. History The history of the Karnak complex is largely the history of Thebes and its changing role in the culture. Religious centers varied by region, and when a new capital of the unified culture was established, the religious centers in that area gained prominence. The city of Thebes does not appear to have been of great significance before the Eleventh Dynasty and previous temple building there would have been relatively small, with shrines being dedicated to the early deities of Thebes, the Earth goddess Mut and Montu. Early building was destroyed by invaders. The earliest known artifact found in the area of the temple is a small, eight-sided column from the Eleventh Dynasty, which mentions Amun-Re. Amun (sometimes called Amen) was long the local tutelary deity of Thebes. He was identified with the ram and the goose. The Egyptian meaning of Amun is "hidden" or the "hidden god". Major construction work in the Precinct of Amun-Re took place during the Eighteenth Dynasty, when Thebes became the capital of the unified Ancient Egypt. Almost every pharaoh of that dynasty added something to the temple site. Thutmose I erected an enclosure wall connecting the Fourth and Fifth pylons, which comprise the earliest part of the temple still standing in situ. Hatshepsut had monuments constructed and also restored the original Precinct of Mut, that had been ravaged by the foreign rulers during the Hyksos occupation. She had twin obelisks, at the time the tallest in the world, erected at the entrance to the temple. One still stands, as the second-tallest ancient obelisk still standing on Earth; the other has toppled and is broken. Another of her projects at the site, Karnak's Red Chapel or Chapelle Rouge, was intended as a barque shrine and originally may have stood between her two obelisks. She later ordered the construction of two more obelisks to celebrate her sixteenth year as pharaoh; one of the obelisks broke during construction, and thus, a third was constructed to replace it. The broken obelisk was left at its quarrying site in Aswan, where it still remains. Known as the unfinished obelisk, it provides evidence of how obelisks were quarried. Construction of the Great Hypostyle Hall also may have begun during the Eighteenth Dynasty (although most new building was undertaken under Seti I and Ramesses II in the Nineteenth). 
Merneptah, also of the Nineteenth Dynasty, commemorated his victories over the Sea Peoples on the walls of the Cachette Court, the start of the processional route (also known as the Avenue of Sphinxes) to the Luxor Temple. The last major change to the Precinct of Amun-Re's layout was the addition of the First Pylon and the massive enclosure walls that surround the precinct, both constructed by Nectanebo I of the Thirtieth Dynasty. Ancient Greek and Roman writers wrote about a range of monuments in Upper Egypt and Nubia, including Karnak, Luxor temple, the Colossi of Memnon, Esna, Edfu, Kom Ombo, Philae, and others. In 323 AD, Roman emperor Constantine the Great recognized the Christian religion, and in 356 Constantius II ordered the closing of pagan temples throughout the Roman empire, into which Egypt had been annexed in 30 BC. Karnak was by this time mostly abandoned, and Christian churches were founded among the ruins; the most famous example of this is the reuse of the central hall of the Festival Hall of Thutmose III, where painted decorations of saints and Coptic inscriptions can still be seen. Thebes' exact placement was unknown in medieval Europe, though both Herodotus and Strabo give the exact location of Thebes and how long up the Nile one must travel to reach it. Maps of Egypt, based on the 2nd century Claudius Ptolemaeus' mammoth work Geographia, had been circulating in Europe since the late 14th century, all of them showing Thebes' (Diospolis) location. Despite this, several European authors of the 15th and 16th centuries who visited only Lower Egypt and published their travel accounts, such as Joos van Ghistele and André Thévet, put Thebes in or close to Memphis. The first European description of the Karnak temple complex was by an unknown Venetian in 1589 and is housed in the Biblioteca Nazionale Centrale di Firenze, although his account gives no name for the complex. Karnak ("Carnac") as a village name, and name of the complex, is first attested in 1668, when two Capuchin missionary brothers, Protais and Charles François d'Orléans, travelled through the area. Protais' writing about their travel was published by Melchisédech Thévenot (Relations de divers voyages curieux, 1670s–1696 editions) and Johann Michael Vansleb (The Present State of Egypt, 1678). The first drawing of Karnak is found in Paul Lucas' travel account of 1704 (Voyage du Sieur Paul Lucas au Levant). It is rather inaccurate, and can be quite confusing to modern eyes. Lucas travelled in Egypt during 1699–1703. The drawing shows a mixture of the Precinct of Amun-Re and the Precinct of Montu, based on a complex confined by the three huge Ptolemaic gateways of Ptolemy III Euergetes / Ptolemy IV Philopator, and the massive 113 m long, 43 m high and 15 m thick First Pylon of the Precinct of Amun-Re. Karnak was visited and described in succession by Claude Sicard and his travel companion Pierre Laurent Pincia (1718 and 1720–21), Granger (1731), Frederick Louis Norden (1737–38), Richard Pococke (1738), James Bruce (1769), Charles-Nicolas-Sigisbert Sonnini de Manoncourt (1777), William George Browne (1792–93), and finally by a number of scientists of the Napoleon expedition, including Vivant Denon, during 1798–1799. Claude-Étienne Savary describes the complex in rather great detail in his work of 1785, which is remarkable in light of the fact that it is a fictional account of a pretend journey to Upper Egypt, composed out of information from other travellers. 
Savary did visit Lower Egypt in 1777–78, and published a work about that too. Main parts This is the largest of the precincts of the temple complex, and is dedicated to Amun-Re, the chief deity of the Theban Triad. There are several colossal statues, including the figure of Pinedjem I, which is 10.5 metres (34 ft) tall. The sandstone for this temple, including all of the columns, was transported from Gebel Silsila 100 miles (161 km) south on the Nile river. It also has one of the largest obelisks, weighing 328 tons and standing 29 metres (95 ft) tall. Located to the south of the newer Amun-Re complex, this precinct was dedicated to the mother goddess, Mut, who became identified as the wife of Amun-Re in the Eighteenth Dynasty Theban Triad. It has several smaller temples associated with it and has its own sacred lake, constructed in a crescent shape. This temple has been ravaged, many portions having been used in other structures. Following excavation and restoration works by the Johns Hopkins University team, led by Betsy Bryan (see below), the Precinct of Mut has been opened to the public. Six hundred black granite statues were found in the courtyard of her temple. It may be the oldest portion of the site. In 2006, Bryan presented her findings of a festival that included apparent intentional overindulgence in alcohol. Participation in the festival included the priestesses and the population. Historical records of tens of thousands attending the festival exist. These findings were made in the temple of Mut because when Thebes rose to greater prominence, Mut absorbed the warrior goddesses, Sekhmet and Bast, as some of her aspects. First, Mut became Mut-Wadjet-Bast, then Mut-Sekhmet-Bast (Wadjet having merged into Bast), then Mut also assimilated Menhit, another lioness goddess, and her adopted son's wife, becoming Mut-Sekhmet-Bast-Menhit, and finally becoming Mut-Nekhbet. Temple excavations at Luxor discovered a "porch of drunkenness" built onto the temple by the pharaoh Hatshepsut, during the height of her twenty-year reign. In a later myth developed around the annual drunken Sekhmet festival, Ra, by then the sun god of Upper Egypt, created her from a fiery eye gained from his mother, to destroy mortals who conspired against him (Lower Egypt). In the myth, Sekhmet's blood-lust was not quelled at the end of the battle and led to her destroying almost all of humanity, so Ra tricked her by turning the Nile as red as blood (the Nile turns red every year when filled with silt during inundation) so that Sekhmet would drink it. The trick, however, was that the red liquid was not blood, but beer mixed with pomegranate juice so that it resembled blood, making her so drunk that she gave up slaughter and became an aspect of the gentle Hathor. The complex interweaving of deities occurred over the thousands of years of the culture. This portion of the site is dedicated to the son of Mut and Amun-Re, Montu, a war-god. It is located to the north of the Amun-Re complex and is much smaller in size. It is not open to the public. The temple that Akhenaten (Amenhotep IV) constructed on the site was located east of the main complex, outside the walls of the Amun-Re precinct. It was destroyed immediately after the death of its builder, who had attempted to overcome the powerful priesthood who had gained control over Egypt before his reign. It was so thoroughly demolished that its full extent and layout are unknown. 
The priesthood of that temple regained their powerful position as soon as Akhenaten died, and were instrumental in destroying many records of his existence. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Geometric_albedo] | [TOKENS: 812] |
Contents Geometric albedo In astronomy, the geometric albedo of a celestial body is the ratio of its actual brightness as seen from the light source (i.e. at zero phase angle) to that of an idealized flat, fully reflecting, diffusively scattering (Lambertian) disk with the same cross-section. (This phase angle refers to the direction of the light paths and is not a phase angle in its normal meaning in optics or electronics.) Diffuse scattering implies that radiation is reflected isotropically with no memory of the location of the incident light source. Zero phase angle corresponds to looking along the direction of illumination. For Earth-bound observers, this occurs when the body in question is at opposition and on the ecliptic. The visual geometric albedo refers to the geometric albedo quantity when accounting for only electromagnetic radiation in the visible spectrum. Airless bodies The surface materials (regoliths) of airless bodies (in fact, the majority of bodies in the Solar System) are strongly non-Lambertian and exhibit the opposition effect, which is a strong tendency to reflect light straight back to its source, rather than scattering light diffusely. The geometric albedo of these bodies can be difficult to determine because of this, as their reflectance is strongly peaked for a small range of phase angles near zero. The strength of this peak differs markedly between bodies, and can only be found by making measurements at small enough phase angles. Such measurements are usually difficult due to the necessary precise placement of the observer very close to the incident light. For example, the Moon is never seen from the Earth at exactly zero phase angle, because then it is being eclipsed. Other Solar System bodies are not in general seen at exactly zero phase angle even at opposition, unless they are also simultaneously located at the ascending or descending node of their orbit, and hence lie on the ecliptic. In practice, measurements at small nonzero phase angles are used to derive the parameters which characterize the directional reflectance properties for the body (Hapke parameters). The reflectance function described by these can then be extrapolated to zero phase angle to obtain an estimate of the geometric albedo. For very bright, solid, airless objects such as Saturn's moons Enceladus and Tethys, whose total reflectance (Bond albedo) is close to one, a strong opposition effect combines with the high Bond albedo to give them a geometric albedo above unity (1.4 in the case of Enceladus). Light is preferentially reflected straight back to its source even at low angle of incidence such as on the limb or from a slope, whereas a Lambertian surface would scatter the radiation much more broadly. A geometric albedo above unity means that the intensity of light scattered back per unit solid angle towards the source is higher than is possible for any Lambertian surface. Stars Stars shine intrinsically, but they can also reflect light. In a close binary star system polarimetry can be used to measure the light reflected from one star off another (and vice versa) and therefore also the geometric albedos of the two stars. This task has been accomplished for the two components of the Spica system, with the geometric albedo of Spica A and B being measured as 0.0361 and 0.0136 respectively. The geometric albedos of stars are in general small, for the Sun a value of 0.001 is expected, but for hotter or lower-gravity (i.e. 
giant) stars the amount of reflected light is expected to be several times that of the stars in the Spica system. Equivalent definitions For the hypothetical case of a plane surface, the geometric albedo is the albedo of the surface when the illumination is provided by a beam of radiation that comes in perpendicular to the surface. Examples The geometric albedo may be greater or smaller than the Bond albedo, depending on surface and atmospheric properties of the body in question. |
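The quantities discussed above can be tied together compactly. The following is a hedged restatement in conventional planetary-photometry notation rather than anything defined in the article itself: p is the geometric albedo, Φ(α) the disc-integrated brightness at phase angle α, q the phase integral, and A_B the Bond albedo.

```latex
% Geometric albedo, phase integral, and Bond albedo (conventional definitions)
p = \frac{\Phi(0)}{\Phi_{\mathrm{Lambert}}(0)}, \qquad
q = 2\int_{0}^{\pi} \frac{\Phi(\alpha)}{\Phi(0)}\,\sin\alpha \, d\alpha, \qquad
A_B = p\,q .
```

On these definitions, a strongly back-scattering surface such as that of Enceladus can have p above unity (about 1.4, as noted above) while A_B stays below one, because its phase integral q is well under the Lambertian value of 3/2.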
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/File:Data_Networks_classification_by_spatial_scope.svg] | [TOKENS: 148] |
File:Data Networks classification by spatial scope.svg Summary In increasing order of scale: • Nanoscale • Body (BAN) • Personal (PAN) • Local (LAN) • Campus (CAN) • Metropolitan (MAN) • Radio access (RAN) |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/BlueBream] | [TOKENS: 2344] |
Contents Zope Zope is a family of free and open-source web application servers written in Python, and their associated online community. Zope stands for "Z Object Publishing Environment", and was the first system using the now common object publishing methodology for the Web. Zope has been called a Python killer app, an application that helped put Python in the spotlight. Over the last few years, the Zope community has spawned several additional web frameworks with disparate aims and principles, but sharing philosophy, people, and source code. Zope 2 is still the most widespread of these frameworks, largely thanks to the Plone content management system, which runs on Zope 2. BlueBream (earlier called Zope 3) is less widespread but underlies several large sites, including Launchpad. Grok was started as a more programmer-friendly framework, "Zope 3 for cavemen", and in 2009 Pyramid gained popularity in the Zope community as a minimalistic framework based on Zope principles. History The Zope Corporation was formed in 1995 in Fredericksburg, Virginia under the name Digital Creations, as a joint venture with InfiNet (a joint newspaper chain venture). The company developed a classified advertisement engine for the Internet. In 1997, the company became independently owned and private. The company's software engineers are led by CTO Jim Fulton. PythonLabs, creators of Python, became part of the company in 2000 (Python founder Guido van Rossum left Zope Corp in 2003). What is now known as Zope 2 began with the merging of three separate software products – Bobo, Document Template, and BoboPOS – into the Principia application server. At the behest of its largest investor, Opticality Ventures, Principia was re-released as free software under the name Zope in 1998. Bobo, and therefore Zope, was the first Web object publishing solution. In November 2004, Zope 3 was released. Zope 3 is a complete rewrite that preserves only the original ZODB object database. It is directly intended for enterprise Web application development using the newest development paradigms. Zope 3 is, however, not compatible with Zope 2, so Zope 2 applications do not run on Zope 3. It was originally intended to introduce a backwards-compatibility layer so that Zope 2 software would run on Zope 3. Instead a module known as Five introduced the new Zope 3 paradigms into Zope 2, although full compatibility isn't possible that way either. The existence of two incompatible Web frameworks called Zope has caused a lot of confusion. In response, in January 2010, Zope 3 was renamed "BlueBream". "Zope" and "blue bream" are names of a kind of fish, Ballerus ballerus. Zope Foundation The Zope Foundation is an organization that promotes the development of the Zope platform by supporting the community that develops and maintains the relevant software components. The community includes both open source software, documentation and web infrastructure contributors, as well as business and organization consumers of the software platform. It manages the zope.org websites, an infrastructure for open source collaboration. Zope versions A Zope website is usually composed of objects in a Zope Object Database, not files on a file system, as is usual with most web servers. This allows users to harness the advantages of object technologies, such as encapsulation. Zope maps URLs to objects using the containment hierarchy of such objects; methods are considered to be contained in their objects as well. 
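Because a Zope site is a containment hierarchy of persistent objects rather than a tree of files, a few lines of Python are enough to illustrate the idea. This is a minimal sketch using the standalone ZODB and persistent packages; the Page class, the file name site.fs and the dictionary keys are invented for illustration and are not part of Zope's own API.

```python
import transaction
from ZODB import DB
from ZODB.FileStorage import FileStorage
from persistent import Persistent
from persistent.mapping import PersistentMapping

class Page(Persistent):
    """A hypothetical content object stored directly in the object database."""
    def __init__(self, title, body):
        self.title = title
        self.body = body

storage = FileStorage("site.fs")        # on-disk storage file (illustrative name)
db = DB(storage)
connection = db.open()
root = connection.root()                # the root mapping of the database

root["pages"] = PersistentMapping()     # containment: a folder-like mapping
root["pages"]["welcome"] = Page("Welcome", "Stored as an object, not a file")
transaction.commit()
db.close()
```

In Zope itself, the same containment hierarchy is what URL traversal walks over, so placing an object inside another is also what determines the path at which it is published.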
Data can be stored in other databases as well, or on the file system, but ZODB is the most common solution. Zope provides two mechanisms for HTML templating: Document Template Markup Language (DTML) and Zope Page Templates (ZPT). DTML is a tag-based language that allows implementation of simple scripting in the templates. DTML has provisions for variable inclusion, conditions, and loops. However, DTML can be problematic: DTML tags interspersed with HTML form non-valid HTML documents, and its use requires care when including logic into templates, to retain code readability. The use of DTML is discouraged by many leading Zope developers. ZPT is a technology that addresses the shortcomings of DTML. ZPT templates can be either well-formed XML documents or HTML documents, in which all special markup is presented as attributes in the TAL (Template Attribute Language) namespace. ZPT offers a very limited set of tools for conditional inclusion and repetition of XML elements. Consequently, the templates are usually quite simple, with most logic implemented in Python code. One significant advantage of ZPT templates is that they can be edited in most graphical HTML editors. ZPT also offers direct support for internationalization. Zope 2 underlies the Plone content management system, as well as the ERP5 open source enterprise resource planning system. BlueBream is a rewrite by the Zope developers of the Zope 2 web application server. It was created under the name "Zope 3", but the existence of two incompatible frameworks with the same name caused much confusion, and Zope 3 was renamed "BlueBream" in January 2010. BlueBream is distributed under the terms of the Zope Public License and is thus free software. Zope 2 has proven itself as a useful framework for Web applications development, but its use revealed some shortcomings.[citation needed] To name a few, creating Zope 2 products involves copying a lot of boilerplate code – "magic" code – that just has to be there, and the built-in management interface is difficult to modify or replace. Zope 3 was a rewrite of the software that attempts to address these shortcomings while retaining the advantages of Zope that led to its popularity. BlueBream is based on a component architecture that makes it easy to mix software components of various origins written in Python. Although originally intended as a replacement for Zope 2, the Zope Component Architecture has instead been backported to Zope 2, starting with Zope 2.8. Many Zope platforms such as Plone are going through the same type of piece-by-piece rewriting. The first production release of the new software, Zope X3 3.0.0, was released on November 6, 2004. The Zope 3 project started in February 2001 as an effort to develop a new version of Zope as an almost complete rewrite, with the goal to retain the successful features of Zope 2 while trying to fix some of its shortcomings. The goal was to create a more developer-friendly and flexible platform for programming web applications than Zope 2 is. The project began with the development of a component architecture, which allows the structuring of code into small, composable units with introspectable interfaces. The interfaces are supported by an interface package in order to provide the functionality of explicitly declared interfaces to the Python language. The first production release of the software, Zope X3, was released on November 6, 2004. In January 2010 Zope 3 was renamed BlueBream. 
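As a rough illustration of the component architecture and the interface package mentioned above, the sketch below uses the zope.interface and zope.component libraries; the IGreeter interface and FriendlyGreeter class are invented for the example and do not come from Zope itself.

```python
from zope.interface import Interface, implementer
from zope.component import getGlobalSiteManager, getUtility

class IGreeter(Interface):
    """An explicitly declared, introspectable interface (hypothetical)."""
    def greet(name):
        """Return a greeting for name."""

@implementer(IGreeter)
class FriendlyGreeter:
    """A small component declaring that it provides IGreeter."""
    def greet(self, name):
        return f"Hello, {name}!"

# Register the component in the global registry, then look it up by
# interface rather than by concrete class.
gsm = getGlobalSiteManager()
gsm.registerUtility(FriendlyGreeter(), IGreeter)

print(getUtility(IGreeter).greet("Zope"))   # -> Hello, Zope!
```

In BlueBream this kind of wiring is normally expressed declaratively in ZCML, described below, rather than in imperative Python.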
The goal of the project was to enable programmers to use Zope in order to expose arbitrary Python objects as model objects to the web without the need to make these objects fulfill particular behavior requirements. In Zope 2 there had been many behavior requirements to allow objects to participate in the framework, which resulted in a large amount of mixin base classes and special attributes. BlueBream uses a model/view architecture, separating the presentation code from the problem domain code. Views and models are linked together by the component architecture. The libraries underlying BlueBream have been evolving into a collection of useful libraries for web application development rather than a single, monolithic application server. BlueBream includes separate packages for interfaces, component architecture, HTTP server, publisher, Zope Object Database (ZODB), Zope Page Templates, I18N, security policy, and so on. The component architecture is used to glue these together. The component architecture is configured using a ZCML (Zope Configuration Markup Language), an XML based configuration file language. The Zope 3 project pioneered the practice of sprints for open source software development. Sprints are intensive development sessions when programmers, often from different countries, gather in one room and work together for a couple of days or even several weeks. During the sprints various practices drawn from agile software development are used, such as pair programming and test-driven development. Besides the goal of developing software, sprints are also useful for geographically separated developers to meet in person and attracting new people to the project. They also serve as a way for the participants to learn from each other. BlueBream is considered a stable framework, used on production projects worldwide, most notably Launchpad. As a result of the development of Zope 3 / BlueBream, there are now many independent Python packages used and developed as a part of BlueBream, and although many of these are usable outside of BlueBream, many are not. The Zope Toolkit (ZTK) project was started to clarify which packages were usable outside BlueBream, and to improve the re-usability of the packages. Thus the Zope Toolkit is a base for the Zope frameworks. Zope 2.12 is the first release of a web framework that builds on Zope Toolkit, and Grok and BlueBream were set to have releases based on the ZTK during 2010. In 2006 the Grok project was started by a number of Zope 3 developers who wanted to make Zope 3 technology more agile in use and more accessible to newcomers. Grok has since then seen regular releases and its core technology (Martian, grokcore.component) is also finding uptake in other Zope 3 and Zope 2 based projects. In late 2017, development began on Zope 4. Zope 4 is a successor to Zope 2.13, making many changes that are not backwards compatible with Zope 2. Zope 5 was released in 2020. Zope page templates As mentioned previously, Zope page templates are themselves XHTML documents, which means they can be viewed and edited using normal HTML editors or XHTML compliant tools (a big advantage compared to other template languages used for Web applications). Templates can also be checked for XHTML compliance so you can be fairly confident that they will automatically expand into proper XHTML. However, these page templates are not meant to be rendered as is. Instead they are marked up with additional elements and attributes in special XML namespaces (see below). 
This additional information is used to describe how the page template should ultimately be processed. Here are some basic examples. To conditionally include a particular element, such as a div element, add the tal:condition attribute to that element. To control what appears inside an element, use the tal:content attribute. Finally, to introduce or replace attribute values, use the tal:attributes attribute; Python expressions can be used, for example, to alter an href at runtime. (A combined sketch of all three attributes is given below.) This is a very cursory explanation of Zope Page Templates. The behavior of Zope Page Templates is almost completely described by a template language built on the TAL, TALES, and METAL specifications. |
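The markup examples this section originally referred to did not survive extraction, so the following is a small substitute sketch showing the same three TAL attributes. It uses the third-party Chameleon implementation of the page-template language rather than Zope's own zope.pagetemplate package, and the variable names (title, url, show_link) are invented for the example.

```python
from chameleon import PageTemplate

# tal:content replaces an element's body, tal:condition drops the element
# unless the expression is true, and tal:attributes rewrites attributes
# (here the href) at render time.
template = PageTemplate("""
<div>
  <h1 tal:content="title">placeholder title</h1>
  <a tal:condition="show_link"
     tal:attributes="href url"
     tal:content="title">placeholder link</a>
</div>
""")

print(template(title="Zope Page Templates",
               url="https://example.org/",
               show_link=True))
```

Because the template is itself well-formed markup, it can still be opened in an ordinary HTML editor before it is rendered, which is the advantage noted at the start of this section.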
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_sport] | [TOKENS: 3871] |
Contents Sociology of sport Sociology of sport, alternately referred to as sports sociology, is a sub-discipline of sociology which focuses on sports as social phenomena. It is an area of study concerned with the relationship between sociology and sports, and also various socio-cultural structures, patterns, and organizations or groups involved with sport. This area of study discusses the positive impact sports have on individual people and society as a whole economically, financially, and socially. Sociology of sport attempts to view the actions and behavior of sports teams and their players through the eyes of a sociologist. Sport is regulated by regulations and rules of behavior, spatial and time constraints, and has governing bodies. It is oriented towards a goal, which makes known both the winner and the loser. It is competitive, and ludic. All sports are culturally situated, intertwined with the value systems and power relations within the host society. The emergence of the sociology of sport (though not the name itself) dates from the end of the 19th century, when the first social psychological experiments dealing with group effects of competition and pace-making took place. Besides cultural anthropology and its interest in games in human culture, among the first efforts to think about sports in a more general way were Johan Huizinga's Homo Ludens and Thorstein Veblen's Theory of the Leisure Class. Homo Ludens discusses the importance of the element of play in culture and society. Huizinga suggests that play, specifically sport, is primary to and a necessary condition of the generation of culture. These written works contributed to the rise of the study of sociology of sport. In 1970, sports sociology gained significant attention as an organized, legitimate field of study. The North American Society for the Sociology of Sport was formed in 1978 with the objective of studying the field. Its research outlet, the Sociology of Sport Journal, was formed in 1984. It is a common assumption that sports can be viewed as a ritual and a game at the same time. Sports as a result can be viewed as a parallel ritual process which is connected to leisure time and freedom. The symbolic effect of a ritual allows classification of social relationships among men and between women and men, as well as of the impact sports have on nations. Some national sports like baseball in Cuba, cricket in the West Indies, and football in a majority of Latin American countries drive passion that goes beyond ethnic status, regional origins, or class lines. Therefore, sport is an important field of analysis for achieving better understanding of the functioning of modern societies. Race and sports There was controversy around the 1936 Berlin Olympic Games, as the rhetoric and laws of the host country (Nazi Germany) encompassed, and indeed were largely based on, overt and extreme racism. Many Germans were dismayed that nonwhite athletes were allowed to compete; the "Nazis were deeply offended by sporting contacts with 'primitive' races and by competing against Negro athletes, in particular." Adolf Hitler agreed with the proposition that people whose ancestors "came from the jungle" were primitive and that their physiques were stronger than those of civilized whites, and he wanted to impose racial segregation on the games, but the Olympic Committee refused. 
The Nazi regime did, however, use any results they could to propagandize the superiority of what they called the Aryan race. Sport has always been characterized by racial social relationships. The first scientific look at race came at the end of the 19th century, when Count Arthur de Gobineau attempted to prove the physical and intellectual superiority of the white race. Darwin's theory of natural selection was used in service of racism as well. After the athletic ability of black sportspeople was proven, the theory shifted toward physical ability at the expense of intellect. Several racist theories were advanced. One held that black people were athletically able because animals had eaten all the slow ones. The myth of "middle passage" posited that only the most athletically able of black people were able to survive the slave trade and plantation work. The matriarchal theory suggested that absent fathers made black people channel their anger into sports, with coaches becoming father figures. The mandingo theory assumed that the most physically potent black men were bred with the most physically potent black women. The psychological theory claimed that black athletes did not have the intellectual capacity to assume leadership positions in sports. The "dumb jock theory" saw black people enrolling on sport scholarships as they were unable to find success in academia. Lastly, the genetic theory suggested that black sportspeople had more of certain muscle fibers. Young African Americans see sports as a means of upward social mobility, which is denied to them through conventional employment. Race often interplays with class, gender and ethnicity to determine how accessible certain sports are, and how the athlete is perceived. For example, golf is inaccessible to African Americans less because of race, and more because of the high economic and social capital needed. Race is often connected to gender, with women having fewer opportunities to access and succeed in sports. Once a woman does succeed, her race is downplayed and her sexuality is accentuated. In certain cultures, especially Muslim ones, women are denied access to sports altogether. In team sports, white players are often placed in central positions which demand intelligence, decisiveness, leadership, calmness and reliability. Black players are in turn placed in positions that demand athletic ability, physical strength, speed and explosiveness. For example, white players are cast as central midfielders and black players as wingers. Gender in sports Female participation in sports is influenced by patriarchal ideologies surrounding the body, as well as ideas of femininity and sexuality. Physical exertion inevitably leads to development of muscle, which is connected to masculinity and is in contrast to the idea of women as presented by modern consumer culture. Women who enter sports early are more likely to challenge these stereotypes. Television networks and corporations focus on showcasing female athletes who are considered attractive, which trivializes the achievements of these sportswomen. Women's sports receive less news coverage than men's sports. During sporting events, the camera focuses specifically on attractive women. Allen Guttmann argues that the erotic component of sports cannot be rooted out, and as such remains one of its key components. Further, attractive male and female athletes will always be more sought after. The erotic component of sports should be researched, instead of being outright rejected. 
Jennifer Hargreaves sees three political strategies for women in sports. Theories in sociology of sport Structural functionalist theories see society as a complex system whose parts work together to promote solidarity and stability. Sport itself developed from religious ceremonies, which served to promote social and moral solidarity of the community. Bromberger saw similarities between religious ceremonies and football matches. Matches are held in a particular spatial configuration, pitches are sacred and may not be polluted by pitch invaders, and lead to intense emotional states in fans. As with religious ceremonies, spectators are spatially distributed according to social distribution of power. Football seasons have a fixed calendar. Group roles on match day are ceremonial, with specially robed people performing intense ritual acts. Like a church, football has an organizational network, from local to global levels. Matches have a sequential order that guides the actions of participants, from pre-match to post-match actions. Lastly, football rituals create a sense of communitas. Songs and choreography can be seen as an immanent ceremony through which spectators transfer their strength to the team. Accounting for the fact that not all actions support the existing societal structure, Robert K. Merton saw five ways a person could react to the existing structure, which can be applied to sports as well: conformism, innovation, ritualism, withdrawal, and rebellion. Erving Goffman drew on Durkheim's conception of positive rituals, emphasizing the sacred status of an individual's "face". Positive (compliments, greetings, etc.) and negative (avoiding confrontation, apologies, etc.) rituals all serve to protect one's face. Sport journalists, for example, utilize both the positive and negative rituals to protect the face of the athlete they wish to maintain good relations with. Birrell furthermore posits that sport events are ritual competitions in which athletes show their character through a mix of bravery, good play and integrity. A good showing serves to reinforce the good face of the athlete. Interpretative sociology explores the interrelations of social action to status, subjectivity, meaning, motives, identities and social change. It avoids explaining human groups through general laws and generalizations, preferring what Max Weber called verstehen - understanding and explaining individual motivations. It allows for a more complete understanding of diverse social meanings, symbols and roles within sport. Sport allows for creation of various social identities within the framework of a single game or match, which may change during it or throughout the course of multiple matches. One's role as a sportsperson further affects how they act outside of a game or a match, e.g. acting out the role of a student athlete. Weber introduced the notion of rationalization. In modern society, relationships are organized to be as efficient as possible, based on technical knowledge, instead of moral and political principles. This creates bureaucracies that are efficient, impersonal and homogeneous. Allen Guttmann identified several key aspects of rationalization, which can likewise be applied to sports. Karl Marx saw sport as rooted in its economic context, subject to commodification and alienation. Neo-Marxism sees sport as an ideological tool of the bourgeoisie, used to deceive the masses, in order to maintain control. As laborers, athletes give up their labour power, and suffer the same fate as the alienated worker. 
Aside from supporting industrial capitalism, sport propagates heavy physical exertion and overworking as something positive. Specialized division of labor forces athletes to constantly perform the same movements, instead of playing creatively, experimentally and freely. The athlete is often under the illusion of being free, unaware of losing control over his labor power. Spectators themselves support the alienation of athletes' labor through their support and participation. Marxist theories have been used to research the commodification of sport, for example, how players themselves become goods or promote them, the hyper-commercialization of sports during the 20th century, how clubs become like traditional firms, and how sport organizations become brands. This approach has been criticized for its tendency toward raw economism, and for supposing that all current social structures function to maintain the existing capitalist order. Supporting sport teams does not necessarily contradict the development of class consciousness and participating in the class struggle. Sport events provide a number of examples of political protest. Neo-Marxist analyses of sport often underestimate the aesthetic side of sport as well. Hegemony research describes the relations of power, as well as methods and techniques used by dominant groups to achieve ideological consent, without resorting to physical coercion. This ideological consent aims to make the exploitative social order seem natural, guaranteeing that the subordinate groups live out their subordination. A hegemony is always open to contestation, and thus counter-hegemonic movements may emerge. The dominant groups may use sports to steer the use of the subordinate classes in the desired direction, or towards consumerism. However, the history of sport shows that the colonized are not necessarily manipulated through sport, while sport professionalization, and their own popular culture, helped the working class avoid mass subordination to bourgeois values. Resistance is a key concept in cultural studies, which describes how subordinate groups engage in particular cultural practices to resist their domination. Resistance can be overt and deliberate or latent and unconscious, but always counters the norms and conventions of the dominant groups. John Fiske differentiated between confrontational semiotics and avoidance. Body and sports The body became a subject of research in the 1980s, with the work of Michel Foucault. For him, power is exercised in two different ways: through biopower and disciplinary power. Biopower centers on the political control of key biological aspects of the human body and whole populations, such as birth, reproduction, death, etc. Disciplinary power is exercised by means of the everyday disciplining of bodies, particularly through controlling time and space. Eichberg sees three different types of bodies as highlighting the difference between disciplined and undisciplined bodies in sport: the dialogic body, of different shapes and sizes, given to freeing itself from control, which was the main type in pre-modern festivals and carnivals; the streamlined, improved body for sports accomplishment and competition; and the healthy, straight body, which is shaped through disciplined regimes of fitness. The grotesque body could be seen in pre-modern festivals and carnivals, e.g. in folk wrestling or the three-legged race. 
Modern sport pedagogy fluctuates between strictness and freedom, discipline and control, but the hierarchical relations of power and knowledge between the coach and athlete remain. Segel claimed that the cultural rise of sports reflected the wider turn of modern society toward physical expression, which revived militarism, war and fascism. Some representatives of the Frankfurt school saw sport as a cult of the fascistic idea of the body. Tännsjö claimed that excessive praise of sporting prowess reflects the fascistic elements in society, as it normalizes the ridicule of the weak and defeated. Prizefighting allows research into the violent body. Prizefighters transform their bodily capital into prizefighting capital, for the purpose of winning fame, status and wealth. Their bodies are exploited by managers, of which they are aware, describing themselves alternatively as prostitutes, slaves and stallions. Prizefighters accept the routine damage their bodies sustain, while at the same time fearing the effects of such damage. A frequent response to this is attempting to turn themselves into heroic personalities. All contact sports have violence as part of their strategy to a certain extent. Sports violence is not individual, but is a product of socialization. Finn sees footballers as being socialized into a culture of quasi-violence, which accentuates different values than those in regular life. It accepts violence as central to the game. Physical injury of sportspeople can be seen through Beck's theory of a "risk society". A risk society is characterized by reflexive modernity, where members of society are well informed, critical and participate in the shaping of social structures. Unlike the routine risk of traditional society, modern societies identify and minimize risks. Reflexive modernity in sports is evinced in the isolation, minimization and removal of the causes of physical injury, while at the same time keeping the techniques and strategies particular to those sports. The lower classes have lower access to risk assessment and avoidance, and as such have a higher rate of participation in riskier sports. Despite this, athletes are still thought to ignore and attempt to overcome pain, as overcoming pain is seen as brave and heroic. The capacity of the athlete to make the body seem invincible is an integral part of sports professionalism. This ignoring of pain is often a key part of some sport subcultures. Children are also often exposed to acute pain and injuries, e.g. in gymnastics. Emotion in sports Emotion has always been a huge part of sports as it can affect both athletes and the spectators themselves. Theorists and sociologists who study the impact of emotions in sports try to classify emotions into categories. Controversial, debated, and discussed intensely, these classifications are not definitive or set in stone. Emotion is very important in sports; athletes can use emotions to convey specific and significant information to their teammates and coaches, and they can use emotion to send false signals to confuse their opponents. In addition to athletes using emotion to their advantage, emotion can also have a negative impact on athletes and their performances. For example, "stage fright," or nervousness and apprehension, can impact their performance in their sport, be it in a positive or negative way. Depending on the level of sports, the level of emotion differs. In professional sports, emotions can be extremely intense because there are many more people in many distinct roles who are involved. 
There are the professional athletes, the coaching staff, the referees, the television crew, the commentators, and last but not least, the fans and spectators. There is much more public press, pressure, and self-pressure. It is extremely difficult to not get emotionally invested in sports; sports are very good at bringing out the worst qualities in people. There have been violent brawls when one team beats another in an intense game, loud fighting and yelling, and intense verbal arguments as well. Emotion is also highly contagious, especially if there are many emotional people in one space. Binary divisions within sports There are many perspectives through which sport can be viewed. Therefore, very often some binary divisions are stressed, and many sports sociologists have shown that those divisions can create constructs within the ideologies of gender and affect the relationships between genders, as well as advocate or challenge social and racial class structures. Some of these binary divisions include: professional vs. amateur, mass vs. top-level, active vs. passive/spectator, men vs. women, sports vs. play (as an antithesis to organized and institutionalized activity). Not only can binary divisions be seen within sports themselves, but they are also seen in the research of sports. The field of research has mainly been dominated by men because many[citation needed] believe that women's input or research is inauthentic compared to men's research. Some women researchers also feel as though they have to "earn" their place within the sports research field whereas men, for the most part, do not. While women researchers in this field do have to deal with gender-related issues when it comes to their research, it does not prevent them from being able to gather and understand the data they are collecting. Sports sociologists believe that women can have a unique perspective when gathering research on sports since they are able to more closely look at and understand the female fan side of sporting events. Following feminist or other reflexive and tradition-breaking paradigms, sports are sometimes studied as contested activities, i.e. as activities in the center of various people's and groups' interests (the connection of sports and gender, mass media, or state politics). These perspectives provide people with different ways to think about sports and figure out the differences between the binary divisions. Sports have always had a tremendous impact on the world as a whole, as well as on individual societies and the people within them. There are many positive aspects to the world of sport, specifically organized sport. Sports involve community values, attempting to establish and exercise good morals and ethics. Spectator sports enliven watchers through the key societal values displayed in the "game". Becoming a fan also teaches a variety of skills that are an important part of everyday life in the office, at home, and on the go. Some of these skills include teamwork, leadership, creativity, and individuality.[citation needed] |
======================================== |