[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEOring1984-95]
Joke

A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punchline, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. The linguist Robert Hetzron offers the definition:

A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry.

It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour: the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline.

Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously.
They are told in both private and public settings; a single person tells a joke to a friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1]

History in print

Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed]

Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period, and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, is from Ancient Egypt, c. 1600 BC: "How do you entertain a bored pharaoh? You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world: a comic triple from the city of Adab dating back to 1200 BC.
It concerns three men seeking justice from a king on the matter of ownership of a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story, which included the punchline, has not survived intact, though legible fragments suggest it was bawdy in nature.

Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta". The punning phrase "tertia deducta" can be translated either as "with one-third off (in price)" or as "with Tertia putting out."

The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek and dating to the fourth or fifth century AD. The author of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". The British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch".
During the 15th century, the printing revolution spread across Europe following the development of the movable-type printing press, coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both the lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in the more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear both to inform and to borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe.

The practice of printers using jokes and cartoons as page fillers was also widespread in the broadsides and chapbooks of the 19th century and earlier. With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour."
These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons.

Telling jokes

Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree, in one form or another, to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking: who is telling what jokes to whom, and why is the teller telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke.
"Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes".

The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. "An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, told with no substantiating details, placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and that the story which follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny.

Following its linguistic framing, the joke, in the form of a story, can be told. It is not required to be verbatim text, unlike other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and on the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh.
A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. 
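Raskin's two conditions can be made concrete with a small sketch. This is a minimal illustration, not the SSTH formalism itself: the `Script` class, its fields, and the doctor/lover text fragments are hypothetical simplifications of how two scripts might be matched against a joke text.

```python
# Minimal sketch of Raskin's two-script condition: both scripts must be
# (at least partially) compatible with the joke text, and the scripts
# must be semantically opposed to each other.

from dataclasses import dataclass

@dataclass(frozen=True)
class Script:
    """A semantic script: a label, the text fragments it can account for,
    and the labels of scripts it is opposed to."""
    label: str
    covers: frozenset
    opposes: frozenset

def is_joke_text(text_fragments, script_a, script_b):
    """Return True when the text supports both scripts and they oppose."""
    fragments = set(text_fragments)
    compatible = fragments & script_a.covers and fragments & script_b.covers
    opposed = (script_b.label in script_a.opposes
               or script_a.label in script_b.opposes)
    return bool(compatible and opposed)

# Raskin's canonical doctor/lover joke, heavily simplified:
doctor = Script("DOCTOR",
                frozenset({"patient asks for the doctor", "doctor's house"}),
                frozenset({"LOVER"}))
lover = Script("LOVER",
               frozenset({"whispering young wife", "come right in"}),
               frozenset({"DOCTOR"}))
text = ["patient asks for the doctor", "whispering young wife", "come right in"]
print(is_joke_text(text, doctor, lover))  # prints True
```

The punchline trigger in Raskin's terms is the fragment that forces the hearer to switch from the first script to the second; here the switch is only modelled implicitly, by the fact that the same fragment list satisfies both scripts.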
In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. This study adds credence to the common experience of exposure to an off-colour joke: a laugh is followed in the next breath by a disclaimer, "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content of the joke.

The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they have not understood the two scripts contained in the narrative as they were intended. Or they do "get it" and do not laugh; the joke might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings; the punchline remains the same, but it is more or less appropriate depending on the current context.

The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary.
In each situation, it is important to identify both the narrator and the audience, as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, a single joke can take on infinite shades of meaning for each unique social setting.

The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening; different types of jokes, going from general to topical to explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour.

That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better: what makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.

Relationships

The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking.
These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce the appropriate boundaries of a relationship.

Electronic

The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and is for the most part solitary. While the text of a joke is preserved, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially.

Internet joking forces a re-evaluation of social spaces and social groups: they are no longer defined only by physical presence and locality, they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with".
This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how one such evolving cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and the responses to them. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation.

Joke cycles

A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or the logical mechanisms which generate the humour (knock-knock jokes).
A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture.

Several kinds of joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986: "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event… The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns".

The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target of the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish.
In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets… The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it."

A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by the widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation undertaken previously by folklorists and ethnologists.

Classification systems

As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist: "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots.
…" Due to its focus on older tale types and their obsolete actors (e.g., the numbskull), the Aarne–Thompson Index is of little help in identifying and classifying the modern joke.

A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. To sample just a few of these specialised indices: one can select an index for medieval Spanish folk narratives, another for linguistic verbal jokes, and a third for sexual humour. To assist the researcher in this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index.

Several difficulties have been identified with these systems of classifying oral narratives according to either tale types or story elements. The first major problem is their hierarchical organisation: one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to file an item and where to find it.
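The multiple-heading problem can be illustrated with a short sketch. This is a hypothetical toy index, assuming invented motif codes rather than real Thompson numbers: filing a narrative under every motif it contains makes retrieval easy, but leaves a hierarchical cataloguer with no single "correct" place to put the text.

```python
# Sketch of the multiple-heading problem in a motif-style index.
# Motif codes here are invented placeholders, not real Thompson numbers.

from collections import defaultdict

def build_index(narratives):
    """File each narrative under every motif it contains; a strictly
    hierarchical index instead forces a choice of one 'major' element."""
    index = defaultdict(list)
    for title, motifs in narratives.items():
        for motif in motifs:
            index[motif].append(title)
    return index

narratives = {
    "elephant in the bar": ["ACTOR.animal", "ITEM.drink", "INCIDENT.absurd-arrival"],
    "lightbulb riddle":    ["ACTOR.group", "ITEM.lightbulb", "INCIDENT.stupidity"],
}
index = build_index(narratives)

# Each joke now appears under three headings: convenient for retrieval,
# but ambiguous about where the text is "really" filed.
print(sorted(index))
```

The same ambiguity is what the text describes: an incident always brings along an actor and usually an item, so any single-heading scheme must discard information that a multi-heading scheme preserves.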
A third significant problem is the "excessive prudery" common in the middle of the 20th century, which meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems:

…Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.

It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour, or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other and then combined into a concatenated classification label. The six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA).

As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. This also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels.
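A concatenated KR label of the kind described above can be sketched as a simple record. The KR names follow the GTVH; the field values, the `check_hierarchy` helper and the lightbulb example are illustrative assumptions, not part of the theory's formal apparatus.

```python
# Sketch of a GTVH-style label: one value per Knowledge Resource.
# KR names follow Attardo and Raskin; the example values are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GTVHLabel:
    language: str             # LA: the actual wording chosen
    narrative_strategy: str   # NS: riddle, dialogue, expanded story, ...
    situation: str            # SI: the setting and props of the joke
    script_opposition: str    # SO: the pair of opposed scripts
    target: Optional[str] = None             # TA: may be empty
    logical_mechanism: Optional[str] = None  # LM: may be empty

def check_hierarchy(label: GTVHLabel) -> bool:
    """One example of a top-down restriction: a lightbulb situation
    implies the riddle narrative strategy."""
    if label.situation == "changing a lightbulb":
        return label.narrative_strategy == "riddle"
    return True

lightbulb = GTVHLabel(
    language="How many X does it take...",
    narrative_strategy="riddle",
    situation="changing a lightbulb",
    script_opposition="normal/abnormal",
    target="X",
    logical_mechanism="figure-ground reversal",
)
print(check_hierarchy(lightbulb))  # prints True
```

Comparing two such labels field by field is one way to read the text's claim that the similarity of jokes can be evaluated by the similarity of their labels.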
"The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed for any verbal humour.

Joke and humour research

Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6]

Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These distinctions become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour".
Why do people laugh? Why do people find something funny? Can jokes predict character or, vice versa, can character predict the jokes an individual laughs at? A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject has become both an emotion to measure and a tool to use in diagnostics and treatment. A newer psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual; as such, it could be a good predictor of life satisfaction. For psychologists it would be useful to measure both how much of this strength an individual has and how it can be measurably increased.

A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These tools take many different approaches to quantifying humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Alternatively, the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed that neither smiles nor laughter are always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials; however, because no two tools use the same jokes, and using the same jokes across languages would not be feasible, how does one determine that the assessment objects are comparable? Further, whom does one ask to rate the sense of humour of an individual: the person themselves, an impartial observer, or their family, friends and colleagues?
Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. 
The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested in recent decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant of the more general incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke.
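The GTVH's "multi-dimensional descriptive label" can be pictured as a simple record with one field per Knowledge Resource. The following is a minimal sketch, not part of the theory itself: the class and field names are mine (following the theory's KR abbreviations), and the example analysis of a light-bulb riddle is invented for illustration.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class GTVHLabel:
    """A joke's descriptive label: one value per Knowledge Resource (KR)."""
    script_opposition: str            # SO: the two overlapping, opposed scripts
    logical_mechanism: Optional[str]  # LM: how the scripts connect; may be empty
    situation: str                    # SI: props, participants, setting
    target: Optional[str]             # TA: the butt of the joke; may be empty
    narrative_strategy: str           # NS: riddle, dialogue, narrative, etc.
    language: str                     # LA: actual wording, punchline placement

def analyze(joke: GTVHLabel) -> dict:
    """Descriptive GTVH analysis: list the values of the six KRs,
    with the caveat that TA and LM may be empty."""
    return {kr: value if value is not None else "(empty)"
            for kr, value in asdict(joke).items()}

# Invented example: a generic light-bulb riddle.
label = GTVHLabel(
    script_opposition="smart/dumb",
    logical_mechanism="figure-ground reversal",
    situation="changing a light bulb",
    target=None,  # TA left empty, per the caveat above
    narrative_strategy="riddle",
    language="punchline in final position",
)
```

Calling `analyze(label)` then yields the six-entry label, with the empty Target rendered as "(empty)".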
Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. 
The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. 
In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. 
As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts currently underway.
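The template-driven punning programs described above can be sketched in a few lines: a fixed template, a finite set of pre-defined punning options, and nothing else. This is a toy illustration, not any actual system; the template and entries are invented.

```python
# Toy slot-filling pun generator, in the spirit of the early template-based
# programs described above: no intelligence, just a template and a finite
# list of pre-defined punning options upon which to build.

TEMPLATE = "What do you call {setup}? {punchline}!"

PUN_ENTRIES = [  # the finite, hand-written set of punning options
    {"setup": "a fish with no eyes", "punchline": "A fsh"},
    {"setup": "an alligator in a vest", "punchline": "An investigator"},
]

def generate_pun(index: int) -> str:
    """Fill the fixed template with one pre-defined option. The program
    has no notion of why the result is funny; it only fills slots."""
    entry = PUN_ENTRIES[index % len(PUN_ENTRIES)]
    return TEMPLATE.format(**entry)
```

Such a program displays exactly the limitation noted above: it can emit well-formed riddles but cannot recognise a new text as a joke, because it has no semantic scripts behind the words.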
========================================
[SOURCE: https://en.wikipedia.org/wiki/International_Network_Working_Group] | [TOKENS: 2046]
International Network Working Group The International Network Working Group (INWG) was a group of prominent computer science researchers in the 1970s who studied and developed standards and protocols for interconnection of computer networks. Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, its goal was to develop an international standard protocol for internetworking. INWG became a subcommittee of the International Federation for Information Processing (IFIP) the following year. Concepts developed by members of the group contributed to the Protocol for Packet Network Intercommunication proposed by Vint Cerf and Bob Kahn in 1974 and the Transmission Control Protocol and Internet Protocol (TCP/IP) that emerged later. History The International Network Working Group was formed by Steve Crocker, Louis Pouzin, Donald Davies, and Peter Kirstein in June 1972 in Paris at a networking conference organised by Pouzin. Crocker saw that it would be useful to have an international version of the Network Working Group, which developed the Network Control Program for the ARPANET. At the International Conference on Computer Communication (ICCC) in Washington D.C. in October 1972, Vint Cerf was approved as INWG's Chair on Crocker's recommendation.[nb 1] The group included American researchers representing the ARPANET[nb 2] and the Merit network, the French CYCLADES and RCP networks,[nb 3] and British teams working on the NPL network, EPSS, and European Informatics Network. During early 1973, Pouzin arranged affiliation with the International Federation for Information Processing (IFIP). INWG became IFIP Working Group 1 under Technical Committee 6 (Data Communication) with the title "International Packet Switching for Computer Sharing" (WG6.1). This standing, although informal, enabled the group to provide technical input on packet networking to CCITT and ISO.
Its purpose was to study and develop "international standard protocols for internetworking". INWG published a series of numbered notes, some of which were also RFCs. The idea for a router (called a gateway at the time) was initially described in "INWG Note 1", a report written in October 1972 by Donald Davies (NPL, UK), P. Shanks (Post Office, UK), Frank Heart (BBN, US), B. Barker (BBN, US), Rémi Després (PTT, France), V. Detwiler (UBC, Canada) and O. Riml (Bell-Northern Research, Canada). These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that function entirely to the hosts. This particular idea, the end-to-end principle, had been pioneered in the CYCLADES network. A second sub-group considered host-to-host protocol requirements. This group consisted of Barry Wessler (ARPA, US), Vint Cerf (Stanford University, US), Kjell Samuelson (Stockholm University), Derek Barber (NPL, UK), C.D. Shephard (Department of Communications, Canada), Louis Pouzin (IRIA, France), Brian Sexton (NPL, UK), William Clipsham (UK), Keith Sandum, Alex McKenzie (BBN, US), and Jeremy Tucker (Logica, UK). In their initial report in October 1972, they listed existing protocols for various networks that they planned to review, including the "Walden Message-Switching Protocol, ARPA H-H Protocol, NPL High-Level Protocol, CYCLADES Protocol, SPSS Protocol, etc." INWG met in New York in June 1973. Attendees included Cerf, Bob Kahn, Alex McKenzie, Bob Metcalfe, Roger Scantlebury, John Shoch and Hubert Zimmermann, among others. They discussed a first draft of an International Transmission Protocol (ITP).
Zimmermann and Metcalfe dominated the discussions; Zimmermann had been working with Pouzin on the CYCLADES network, while Metcalfe, Shoch and others at Xerox PARC had been developing the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Notes from the meeting, recorded by Cerf and McKenzie, were circulated afterwards (INWG 28). There was a follow-up meeting in July. Gérard Le Lann and G. Grossman made contributions after the June meeting. Building on this work, in September 1973, Kahn and Cerf presented a paper, Host and Process Level Protocols for Internetwork Communication, at the next INWG meeting at the University of Sussex in England (INWG 39). Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmermann. Pouzin circulated a paper on Interconnection of Packet Switching Networks in October 1973 (INWG 42), in which he introduced the term catenet for an interconnected network. Zimmermann and Michel Elie wrote a Proposed Standard Host-Host Protocol for Heterogeneous Computer Networks: Transport Protocol in December 1973 (INWG 43). Pouzin updated his paper with A Proposal for Interconnecting Packet Switching Networks in March 1974 (INWG 60), published two months later in May. Zimmermann and Elie circulated a Standard host-host protocol for heterogeneous computer networks in April 1974 (INWG 61). Pouzin published An integrated approach to network protocols in May 1975. Kahn and Cerf published a significantly updated and refined version of their proposal in May 1974, A Protocol for Packet Network Intercommunication. A later version of the paper acknowledged several people including members of INWG and attendees at the June 1973 meeting. It was updated in INWG 72/RFC 675 in December 1974 by Cerf, Yogen Dalal and Carl Sunshine, which introduced the term internet as a shorthand for internetwork.
Two competing proposals had evolved, the early Transmission Control Program (TCP), originally proposed by Kahn and Cerf, and the CYCLADES transport station (TS) protocol, proposed by Pouzin, Zimmermann and Elie. There were two sticking points: whether there should be fragmentation of datagrams (as in TCP) or standard-sized datagrams (as in TS); and whether the data flow was an undifferentiated stream or maintained the integrity of the units sent. These were not major differences. After "hot debate", McKenzie proposed a synthesis in December 1974, Internetwork Host-to-Host Protocol (INWG 74), which he refined the following year with Cerf, Scantlebury and Zimmermann (INWG 96). After reaching agreement with the wider group,[nb 4] a Proposal for an international end to end protocol was published by Cerf, McKenzie, Scantlebury, and Zimmermann in 1976. It was presented to the CCITT and ISO by Derek Barber, who became INWG chair earlier that year. Although the protocol was adopted by networks in Europe, it was not adopted by the CCITT, ISO, or the ARPANET. The CCITT went on to adopt the X.25 standard in 1976, based on virtual circuits. ARPA funded testing of TCP in 1975 at Stanford, BBN and University College London. With funding from ARPA, another group published the Internet Experiment Notes from 1977 to 1982. This group developed TCP/IP, splitting out the Internet Protocol as a connectionless layer beneath the Transmission Control Protocol, a reliable connection-oriented service; this design reflects concepts from Pouzin's CYCLADES project. Ray Tomlinson proposed a network mail protocol in INWG Protocol note 2 (a separate series of INWG notes) in September 1974. Derek Barber proposed a network mail protocol and implemented it on the European Informatics Network, which he reported in INWG 192 in February 1979. His work was referenced by Jon Postel in his first paper on Internet email, published in the Internet Experiment Note series. Alex McKenzie served as chair from 1979 to 1982.
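The division of labour described above, connectionless gateways that may silently drop datagrams while reliability lives entirely in the hosts, can be sketched in a few lines. This is a toy illustration of the end-to-end principle, not of any INWG protocol; all class and function names are invented, and the "acknowledgement" is simulated by checking the destination's delivery log.

```python
class LossyGateway:
    """A connectionless gateway: forwards datagrams without keeping any
    per-connection state, and may silently drop some of them."""
    def __init__(self, drop_every=3):
        self.count = 0
        self.drop_every = drop_every

    def forward(self, datagram, destination):
        self.count += 1
        if self.count % self.drop_every == 0:
            return  # dropped silently; the gateway takes no responsibility
        destination.receive(datagram)

class Host:
    """A receiving host; in a real protocol it would return acknowledgements."""
    def __init__(self):
        self.delivered = []

    def receive(self, datagram):
        self.delivered.append(datagram)

def send_reliably(data, gateway, destination, max_tries=10):
    """Host-side reliability: retransmit until delivery is confirmed.
    (Here we peek at the destination's log as a stand-in for a real
    acknowledgement sent back through the network.)"""
    for _ in range(max_tries):
        before = len(destination.delivered)
        gateway.forward(data, destination)
        if len(destination.delivered) > before:
            return True
    return False
```

The point of the sketch is that the gateway code contains no retransmission logic at all; all recovery from loss happens in `send_reliably`, mirroring the end-to-end design pioneered in CYCLADES.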
A new international effort, beginning in 1978, led to the OSI model in 1984, of which many members of the INWG became advocates. During the Protocol Wars of the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. ARPA partnerships with the telecommunication and computer industry in the 1980s led to private sector adoption of the Internet protocol suite as a communication protocol. McKenzie became the Secretary in 1983 and Carl Sunshine, who had worked with Vint Cerf and Yogen Dalal at Stanford on the first TCP specification, became the INWG chair until 1987, when Harry Rudin, at the IBM Zurich Research Laboratory, took over. The INWG continued to work on protocol design and formal specification until the 1990s, when it disbanded as the Internet grew rapidly. Nonetheless, issues with the Internet protocol suite remain, and alternatives building on INWG ideas, such as the Recursive Internetwork Architecture, have been proposed. Legacy The work of INWG was significant in the creation of routers, the Transmission Control Program, and email, which all ultimately became pivotal in the working of the Internet. ... the International Network Working Group was created ... to draw a larger cohort of people into this whole question of how to design and build packet switch networks. That eventually led to the design of the Internet. — Vint Cerf (2020) Members The group had about 100 members.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Death_and_state_funeral_of_Kim_Jong_Il] | [TOKENS: 2520]
Death and state funeral of Kim Jong Il Kim Jong Il's death was reported by Korean Central Television on 19 December 2011. The presenter Ri Chun-hee announced that he had died on 17 December at 8:30 am of a massive heart attack while traveling by train to an area outside Pyongyang. Reportedly, he had received medical treatment for cardiac and cerebrovascular diseases, and during the trip, Kim was said to have had an "advanced acute myocardial infarction, complicated with a serious heart shock". His son Kim Jong Un was announced as North Korea's next leader during the same newscast as "great successor to the revolutionary cause of Juche and outstanding leader of our party, army and people". The elder Kim's funeral was held on 28 December in Pyongyang, with a mourning period lasting until the following day. Announcement North Korean state media did not report Kim Jong Il's death until 51 hours after it occurred, apparently due to political jockeying and discussions that surrounded the official version of his legacy, as well as agreeing upon the membership of his funeral committee. On the morning of 19 December, all work units, schools, government agencies, and military personnel were informed of a major announcement to take place at noon. At noon, Ri Chun-hee, a Korean Central Television news anchor, clad in full black traditional Korean clothing, announced the death of Kim Jong Il to the general population of North Korea. She was the longtime announcer of many important news stories during his tenure as Supreme Leader, and was part of the broadcast team that covered Kim Il Sung's state funeral in 1994, as well as a friend of the late Chon Hyong-kyu, a KCTV news presenter who announced Kim Il Sung's death 17 years prior. During the announcement, a portrait of a smiling, idealized image of Kim Jong Il was released, continuing the tradition of issuing official posthumous portraits of supreme leaders of North Korea after their death.
Following the official notice, a male news anchor wearing a suit and black tie proceeded to announce the entire funeral committee of Kim Jong Il in order of the rankings established by the authorities. The committee had 232 names; Kim Jong Un was ranked first, while the leaders of North Korea's two minor parties, Kim Yong-dae and Ryu Mi-yong, were ranked last. The head of South Korea's National Intelligence Service said surveillance footage revealed that Kim's personal train, on which he is said to have died, did not move over the weekend. This implied that the train was stationary when North Korean authorities claimed he had died. According to editors of The Chosun Ilbo newspaper, it was reported that circumstances surrounding Kim's death were inconsistent with what would generally be expected during official business trips: specifically, inclement weather conditions were present, and the time of day when Kim was supposedly travelling conflicted with his usual circadian rhythm, as Kim was known to be a night owl. Furthermore, few witnesses observed the events. Reactions Many countries, organizations, and individuals issued reactions to the death. According to CNN, reactions were "somewhat muted" in comparison to deaths of other world leaders. Only a few countries reacted immediately after Kim's death was announced on North Korea's KCTV. Some countries, like the United States, took the opportunity to comment on their relationship with South Korea. South Korea decided not to offer official condolences, mirroring both the worsened relations after the ROKS Cheonan sinking and the bombardment of Yeonpyeong, and its position after the death of Kim Il Sung in 1994. The Chinese Foreign Ministry called Kim a "great leader" and added that Beijing would continue to offer its support. Japan expressed condolences and said it hoped Kim's death would not affect the region adversely. Reactions in Europe were "a mix of hope and watchfulness".
In North Korea, the official reaction was grief and support for the succession of Kim Jong Un, although in other places, there was a more muted reaction. Funeral committee North Korea announced a 232-member funeral committee headed by Kim Jong Un that planned and oversaw Kim Jong Il's funeral, which took place on 28 December. Observers believe the order of names on the list gives clues to the rankings of individuals in the regime's power structure, with Kim Jong Un's position at the top a further indication that he was Jong Il's successor as supreme leader. According to Kim Keun-sik of Kyungnam University, "The list is in the order of members of the standing committee of the Politburo, then members and candidate members. It shows that the party will be stronger power than the military, because Kim Jong Il's brother-in-law Jang Song-taek or O Kuk-ryol, the vice-chairman of the National Defense Commission, are listed further down." The National Funeral Committee released the following details on 19 December 2011: [The National Funeral Committee] notifies that it decided as follows so that the whole party, army, and people can express the most profound regret at the demise of leader Kim Jong Il and mourn him in deep reverence: — Korean Central News Agency, 19 December 2011 The 232 members of the funeral committee were: Lying in state On 20 December, Kim Jong Il's embalmed body lay in state in a glass coffin at the Kumsusan Memorial Palace, where his father Kim Il Sung is also interred, for an 11-day mourning period prior to the funeral. Like his father, Kim's body was covered in a red flag and surrounded by blossoms of his namesake flowers, red kimjongilia. It was expected that the body would be placed next to his father's bier following the funeral and mourning period. As solemn music played, Kim Jong Un entered the hall to view his father's bier, surrounded by military honour guards. He observed a moment of solemn silence, then circled the bier, followed by other officials.
On 24 December, Kim Jong Un made a third visit to the palace where his father's body lay in state. In this broadcast, Jang Song-taek, whom South Korean intelligence assumed would play a larger role supporting the heir, stood in military uniform near the young Kim, who wept this time as he paid his respects to Kim Jong Il's body lying in state.[citation needed] Funeral and memorial service The funeral itself occurred on 28 December. The 40-kilometre (25 mi), 3-hour funeral procession was covered in snow (which local newscasters described as "heaven's tears") as soldiers beat their chests and cried out "Father, Father." A Lincoln Continental limousine carried a giant portrait of Kim Jong Il. Jong Il's casket, draped with the Korean Workers' Party flag, was carried on top of another Lincoln Continental hearse, while Kim Jong Un and his uncle Jang Song-taek walked immediately behind. Army chief of the general staff Ri Yong-ho and defence minister Vice-Marshal Kim Yong-chun walked along the opposite side of the vehicle during the procession segments at the Kumsusan Memorial Palace. The procession returned to Kumsusan Palace, where Jong-un stood flanked by the top party and military officials expected to be his inner circle of advisers as rifles fired 21 times, then saluted again as goose-stepping soldiers carrying flags and rifles marched past the palace square. Reportedly, Jong Il's body would be embalmed and put on display indefinitely in the manner of Kim Il Sung and other Communist leaders such as Vladimir Lenin, Mao Zedong, and Ho Chi Minh.
The convoy during the funeral procession was composed of lead patrol cars, the funeral hearse and its escorts, military escorts, motorised colour guards, an OB van of Korean Central Television, various cars (including a fleet of black Mercedes), and trucks carrying wreaths and five military bands from the KPA.[citation needed] On the day of the memorial service, 29 December, Chairman of the Presidium, Kim Yong-nam, gave an address to mourners gathered in Kim Il-sung Square. Kim Yong-nam told mourners that "The great heart of comrade Kim Jong-il has ceased to beat... such an unexpected and early departure from us is the biggest and the most unimaginable loss to our party and the revolution," and that North Korea would "transform the sorrow into strength and courage 1,000 times greater under the leadership of comrade Kim Jong-un." The chairman also affirmed Kim Jong Un's position as his father's successor saying "Respected Comrade Kim Jong-un is our party, military and country's supreme leader who inherits great comrade Kim Jong-il's ideology, leadership, character, virtues, grit and courage". General Kim Jong-gak addressed the memorial service on behalf of the military, saying "Our people's military will serve comrade Kim Jong-un at the head of our revolutionary troops and will continue to maintain and complete the Songun accomplishments of great leader Kim Jong-il". Songun refers to Kim Jong Il's policy of prioritising the "military first" in economic decisions. Kim Jong Un did not make an address but stood with his head bowed, watching from a balcony of the Grand People's Study House, overlooking the square. He was flanked by his aunt, Kim Kyong-hui, her husband, Jang Song-taek, and senior party and military officials.
After the speeches and a nationwide observance of a three-minute silence, a row of heavy artillery guns was fired in a 21-gun salute, followed by a cacophony of sirens, horns and whistles sounded simultaneously from trains and ships across the country to mark the end of the mourning period. The assembly concluded with a military band playing The Internationale. State television then broadcast a military choir and wind band performing The Song of General Kim Jong Il to formally conclude. Kim Jong Un's elder brothers, Kim Jong-nam and Kim Jong-chol, are not known to have been in attendance at the lying in state, the funeral, or the memorial service. The funeral showcased seven officials who are believed to be mentors or major aides to Kim Jong Un: Jang Song-taek, Mr. Kim's uncle and a vice-chairman of the National Defense Commission; Kim Ki-nam, North Korea's propaganda chief; Choe Tae-bok, the party secretary in charge of external affairs; Vice Marshal Ri Yong-ho, head of the military's general staff; Kim Yong-chun, the defence minister; Kim Jong-gak, a four-star general whose job is to monitor the allegiance of other generals; and U Dong-chuk, head of the North's secret police and spy agency. On 1 January 2012, the Japanese daily Yomiuri Shimbun reported that Kim Jong-nam secretly flew to Pyongyang from Macau on 17 December 2011, after learning about his father's death that day, and is presumed to have accompanied Kim Jong Un when paying his last respects to their father. He left after a few days to return to Macau and was not in attendance at the funeral, in order to avoid speculation about the succession. According to Daily NK, anyone who did not participate in the organised mourning sessions or did not seem genuine enough in their sorrow was sentenced to at least six months in a labour camp.
Mourners were also barred from wearing hats, gloves or scarves even though the temperature that day was −2.4 °C (27.7 °F)—presumably so authorities could check to make sure they were displaying sufficient grief. North Korea angrily denied this accusation, blaming it on "reptile media" in the pay of the South Korean government. A photo slideshow from The Los Angeles Times does show multiple mourners with gloves and scarves. Reports of mourning The Korean Central News Agency (KCNA) claimed that strange natural phenomena occurred in North Korea around the time of Kim Jong Il's death. In the past, the North Korean government has been known to encourage stories of miraculous deeds and supernatural events credited to Kim Il Sung and Kim Jong Il.[citation needed] KCNA also claimed that more than five million North Koreans, more than 25% of the national population, had shown up to mourn Kim Jong Il.[citation needed]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Desert_planet] | [TOKENS: 410]
Contents Desert planet A desert planet, also known as a dry planet, an arid planet, or a dune planet, is a type of terrestrial planet that is arid at the surface level. Deserts on Earth, such as Antarctica or the Sahara, can be cold or hot and can even retain water; desert planets, however, are arid across their entire surface. Mars is a prominent example of a (cold) desert planet with a tenuous atmosphere, but other arid planets with both denser and thinner atmospheres, such as Venus and Mercury, have also been identified as desert planets. History A 2011 study suggested that not only are life-sustaining desert planets possible, but that they might be more common than Earth-like planets. The study found that, when modeled, desert planets had a much larger habitable zone than ocean planets. The same study also speculated that Venus may have been a habitable desert planet as recently as 1 billion years ago. It is also predicted that Earth will become a desert planet within a billion years due to the Sun's increasing luminosity. A study conducted in 2013 concluded that hot desert planets can exist without a runaway greenhouse effect at around 0.5 AU from Sun-like stars. That study concluded that a minimum humidity of 1% is needed to wash carbon dioxide out of the atmosphere, but that too much water can itself act as a greenhouse gas. Higher atmospheric pressures increase the range over which water can remain liquid. Science fiction The concept has become a common setting in science fiction, appearing as early as the 1956 film Forbidden Planet and Frank Herbert's 1965 novel Dune. The environment of the desert planet Arrakis (also known as Dune) in the Dune franchise drew inspiration from the Middle East, particularly the Arabian Peninsula and Persian Gulf, as well as Mexico. Dune in turn inspired the desert planets which prominently appear in the Star Wars franchise, including Tatooine, Geonosis, and Jakku.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-68] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. 
The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. 
Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. 
At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as it had broken an "unwritten law" that native companies do not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research, but decided to develop the work it had begun with Nintendo and Sega into its own console based on the SNES. Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to the company's involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that decisive action had to be taken, Sony severed all ties with Nintendo on 4 May 1992. 
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, the project still faced opposition from the majority of those present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation Sony had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. 
According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay. Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995, respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. 
Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Winning over these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that the game would be the PlayStation's first. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. 
Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method of designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. 
Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of the machine should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM a challenging aspect given the 3.5-megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. 
At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and walked off to a round of applause. The attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who had not been informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). Over 100,000 pre-orders had been placed and 17 games were available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. 
Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released officially because a third company had registered the trademark; the market was initially dominated by the officially distributed Sega Saturn, but as the Sega console was withdrawn, PlayStation imports and widespread piracy increased. In China, the Sega Saturn was likewise the most popular 32-bit console, but after it left the market the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release the console there. 
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with the "E" printed in red, reading as "you are not ready"). The four geometric shapes standing in for the missing letters were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. 
Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush-fund money to invest in impromptu marketing. In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing its monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. 
Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this feat faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and supports music sequencing. 
It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate a total of 4,000 sprites and 180,000 shaded and textured polygons per second, as well as 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. 
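The hardware figures quoted above can be put into rough perspective with a little arithmetic. The following sketch assumes 16-bit colour (2 bytes per pixel) and a 60 Hz frame rate for illustration; neither of those parameters is stated in the text, so treat the results as a back-of-the-envelope check rather than a precise account of the real hardware, which managed its framebuffers in more complicated ways.

```python
# Back-of-the-envelope figures derived from the specs quoted above.
# Assumed (not stated in the article): 16-bit colour, 60 Hz refresh.

VRAM_BYTES = 1 * 1024 * 1024          # the quoted 1 MB of video RAM

def framebuffer_bytes(width, height, bytes_per_pixel=2):
    """Size of a single frame at the given resolution."""
    return width * height * bytes_per_pixel

# Highest and lowest output resolutions the console supports:
hi = framebuffer_bytes(640, 480)      # 614,400 bytes
lo = framebuffer_bytes(256, 224)      # 114,688 bytes
assert hi <= VRAM_BYTES               # a 640x480 16-bit frame fits in VRAM

# Per-frame polygon budget implied by the quoted per-second rates:
flat_per_frame = 360_000 // 60        # 6,000 flat-shaded polygons/frame
tex_per_frame = 180_000 // 60         # 3,000 shaded/textured polygons/frame

print(hi, lo, flat_per_frame, tex_per_frame)
```

Under these assumptions, even the largest 640×480 frame fits within the 1 MB of video RAM, and the quoted throughput works out to a few thousand polygons per displayed frame, which is the scale at which mid-1990s 3D scenes were built.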
The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software necessary to program PlayStation games and applications using C compilers. On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo Pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper to be used to access menus.
The European and North American models of the original PlayStation controller are roughly 10% larger than their Japanese counterpart, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring twin joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. 
The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between firmware versions: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability.
To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (so duplicated discs omitted it), since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early SCPH-1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction.
The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
Game library
The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash!
heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format.
Reception
The PlayStation was mostly well received upon release.
Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo. Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors.
Legacy
SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega.
Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, as profits from its video game division contributed 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs.
Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets.
Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wider variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/H-1B_visa]
H-1B visa
The H-1B is a classification of non-immigrant visa in the United States that allows U.S. employers to hire foreign workers in specialty occupations, as well as fashion models and persons engaged in Department of Defense projects who meet certain conditions. The regulation and implementation of visa programs are carried out by the United States Citizenship and Immigration Services (USCIS), an agency within the United States Department of Homeland Security (DHS). Foreign nationals may have H-1B status while present in the United States, and may or may not have a physical H-1B visa stamp. INA section 101(a)(15)(H)(i)(b), codified at 8 USC 1184(i)(1), defines a "specialty occupation" as one that requires the theoretical and practical application of a body of highly specialized knowledge and attainment of at least a bachelor's degree or its equivalent. H-1B visa status holders typically have an initial three-year stay in the U.S. They are entitled to a maximum of six years of physical presence in H-1B status. After reaching certain milestones in the green card process, H-1B status can be extended beyond the six-year maximum. The number of initial H-1B visas issued each fiscal year is capped at 65,000, with an additional 20,000 visas available for individuals who have earned a master's degree or higher from a U.S. institution, for a total of 85,000. Some employers are exempt from this cap. Sponsorship by an employer is required for applicants. In 2019, the USCIS estimated there were 583,420 foreign nationals on H-1B visas in the United States. Between 1991 and 2022, the number of H-1B visas issued quadrupled. A total of 265,777 H-1B visas were approved in 2022, the second-largest category of visa in terms of the number of foreign workers after the 310,676 H-2A visas issued to temporary, seasonal, agriculture workers. H-1B visas have been politically controversial, with various actors seeking to expand or restrict the visa program.
Studies have shown that H-1B visas can lead to lower wages for competing workers, but also that they have had welfare-improving effects for Americans, leading to significant overall wage gains, lower consumer prices, greater innovation, and greater total factor productivity growth. In 2025, the Trump administration imposed a $100,000 fee for filing for an H-1B visa starting in September 2025, with exemptions for changes of status, including those currently in the U.S. on F-1 OPT.
Eligibility and application process
The H-1B visa is a non-immigrant visa in the United States that allows employers to hire foreign workers in specialty occupations, has an annual cap on the number of issued visas, and requires employers to submit paperwork that ensures compliance with various provisions of the law authorizing the visa. Specialty occupations, as defined by the United States Code, are those jobs that require a "theoretical and practical application of a body of highly specialized knowledge" and a bachelor's degree or higher or the equivalent experience. In order to determine which jobs qualify under the law, the USCIS works with the Department of Labor and its Standard Occupational Classification database to determine a list of specific occupations. To maintain H-1B visa status, visa holders must maintain employment with their sponsoring employer. If employment ends, the individual must either leave the U.S., seek a change of status, or obtain new H-1B sponsorship. As of 2017, USCIS implemented a grace period of up to 60 days following employment termination, during which the individual may remain in the United States to seek new employment or file for a change of status. The duration of stay for an H-1B visa holder is typically six years. In 2000, exemptions were added that increase the length of stay for some visa holders. For certain countries, the three-year extension period has been reduced to one-year extensions for various reasons.
For example, during Melania Trump's time as an H-1B visa holder, she was limited to one-year increments, which was the maximum time allowed for H-1B visas for citizens of Slovenia. Melania Trump became a U.S. citizen in 2006. After six years, H-1B holders who have not obtained permanent residency must spend one year outside the U.S. before reapplying for another H-1B visa, unless they qualify for an extension under the exceptions mentioned above. Visa holders may change to a job in a specialized occupation other than the one they were approved for in their initial application, provided that their new job is considered a specialized occupation and that their employment is officially sponsored by their new employer. USCIS uses an electronic registration system and lottery process to manage applications. Effective February 27, 2026, USCIS will select H-1B visa applicants using a system that makes it more likely that highly paid workers will receive H-1B visas. Workers will be classified into one of four levels based on the offered wage for their job and location of employment. The highest level will be entered into the lottery four times, the next highest level three times, the level below that two times, and the lowest level once. American employers of H-1B workers must create an account on the USCIS online portal. This account enables them to submit registrations for prospective beneficiaries during the designated registration period. USCIS announces dates for the registration period each fiscal year, typically in March. As of September 21, 2025, the registration fee is $100,000 per beneficiary. This fee is non-refundable and must be paid at the time of registration. Employers provide basic information about their company and each prospective beneficiary. USCIS states the streamlined process reduces the administrative burden compared to submitting full petitions initially.
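The wage-level weighting described above amounts to a weighted draw: each registration gets as many lottery entries as its wage level, and winners are drawn without duplicates. The sketch below is a hypothetical illustration of that mechanism only; the function name, data layout, and use of Python's `random` module are invented for this example, and the real USCIS implementation is not public.

```python
import random

def weighted_h1b_selection(registrations, num_selections, seed=None):
    """registrations: dict mapping registration ID -> wage level (1-4).
    A level-4 registration gets four entries, level-1 gets one."""
    rng = random.Random(seed)
    # Build the pool with one entry per registration per wage level.
    pool = [reg_id for reg_id, level in registrations.items()
            for _ in range(level)]
    rng.shuffle(pool)
    # Walk the shuffled pool, skipping duplicates, until enough are picked.
    selected = []
    for reg_id in pool:
        if reg_id not in selected:
            selected.append(reg_id)
        if len(selected) == num_selections:
            break
    return selected

# Hypothetical example: a level-4 registrant has four times as many
# entries per draw as a level-1 registrant.
regs = {"high_wage": 4, "mid_wage": 2, "entry_wage": 1}
print(weighted_h1b_selection(regs, 2, seed=42))
```

Note that weighting raises the odds of selection but does not guarantee it; a level-1 registration can still win.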
The annual H-1B season officially starts in March of each year, when petitioners are allowed to register their applicants electronically. If more registrations are submitted than there are visas available, a random selection is conducted, also called the H-1B lottery. After the registration period closes, USCIS conducts the lottery and notifies selected registrants. Employers with selected registrations have a limited period, typically 90 days, to submit completed H-1B petitions (Form I-129) for their beneficiaries. The earliest date for filing these petitions is usually April 1. During the 2024 fiscal-year lottery, there were 758,994 eligible electronic registrations and 110,791 people selected for an H-1B visa. Selected registrants can begin filing their Labor Condition Application with the Department of Labor on April 1. This allows a six-month window before the earliest employment start date of October 1. USCIS implements measures to prevent fraud and abuse in the registration process, including a beneficiary-specific selection process to prevent multiple registrations for the same individual by different employers. These measures aim to ensure a fair selection process. In March 2017, a federal judge in Oregon dismissed a lawsuit challenging the H-1B visa lottery system, granting summary judgment in favor of USCIS, applying Chevron deference. The court's ruling acknowledged USCIS's discretion in implementing this system to address the overwhelming number of petitions received each year. Before an employer can hire a foreign worker under the H-1B visa program, the employer must submit a Labor Condition Application (LCA) to the U.S. Department of Labor for certification. The LCA ensures the employment of H-1B workers will not harm the wages or working conditions of U.S. workers in similar roles. The LCA is designed to protect both U.S. and foreign workers by setting standards for wages and working conditions. Employers are prohibited from using the H-1B program to replace U.S.
workers during labor disputes or to exploit foreign workers by offering substandard wages. Employers must keep detailed public records, making LCAs available for inspection by the Labor Department and members of the public upon request. The forms used to fulfill this requirement are Forms ETA-9035E and ETA-9035. The Labor Department imposes additional requirements on employers who are dependent on H-1B workers (15 percent or more of their workforce) or who are willful violators of H-1B rules with the Department of Labor in the past. In 2025, the Department of Labor launched Project Firewall, a targeted H-1B enforcement initiative for investigating employers suspected of program abuse. The program allows the Secretary of Labor to certify investigations based on "reasonable cause" and coordinates with other federal agencies to detect underpayment, misrepresentation, and improper job arrangements.
Maintaining status
H-1B visa holders are taxed based on residency status under the Substantial Presence Test. Those present in the U.S. for at least 183 weighted days over three years are resident aliens and taxed on worldwide income. Others are nonresident aliens, taxed only on U.S.-sourced income. H-1B workers must pay Social Security and Medicare taxes, unless exempt under a Totalization Agreement. As such, visa holders may be eligible to receive Social Security benefits upon retirement, provided they have enough credits and are not barred by any totalization agreement with their home country. Employers also pay federal unemployment tax on their wages. For tax filing, nonresidents use Form 1040-NR, while residents file Form 1040. Dual-status taxpayers (those changing residency status during the year) must file specialized returns.
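The "183 weighted days" arithmetic can be made concrete. Note that the 1/3 and 1/6 weights for the two prior years, and the 31-day current-year minimum, are the standard IRS rules, which the text above only summarises; the function itself is an illustrative sketch.

```python
def substantial_presence(days_current, days_prior, days_two_prior):
    """Substantial Presence Test: all days in the current year, one-third
    of days in the prior year, and one-sixth of days in the year before
    that. Residency also requires at least 31 days of presence in the
    current year (standard IRS rule, not spelled out in the text)."""
    weighted = days_current + days_prior / 3 + days_two_prior / 6
    return days_current >= 31 and weighted >= 183

print(substantial_presence(120, 120, 120))  # 120 + 40 + 20 = 180 -> False
print(substantial_presence(183, 0, 0))      # -> True
```

So a worker present 120 days in each of three consecutive years stays a nonresident alien, while one present 183 days in a single year becomes a resident alien for tax purposes.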
H-1B holders who qualify for tax treaty benefits must file Form 8833, with additional forms for specific exemptions. Ensuring compliance with tax classification and reporting prevents penalties.

The H-1B is considered a "dual intent" visa: although it is a temporary visa, it gives holders the option to apply for permanent residency. Employers often support this process by sponsoring green card applications for H-1B employees. Typically, visa holders continue working in the U.S. on the visa while they apply for permanent residency. H-1B1 visas are not dual intent.

H-1B visa holders can bring immediate family members, such as their spouse and children under 21, to the United States as dependents under the H-4 visa category. An H-4 visa holder may remain in the U.S. as long as the H-1B visa holder retains legal status. An H-4 visa holder is allowed to attend school, apply for a driver's license, and open a bank account in the U.S.

When an H-1B worker travels outside the U.S. (except to Canada or Mexico for 30 days or less), they must have a valid visa stamp in their passport to re-enter the U.S. If their visa stamp has expired but they have an unexpired I-797 petition approval notice, they must visit a U.S. embassy and appear before a Department of State consular officer to obtain a new H-1B visa stamp. Consular officers follow the Foreign Affairs Manual, which states that an approved USCIS petition confirms the basic requirements for H-1B classification have been met; officers do not re-evaluate whether the job qualifies as a specialty occupation or whether the applicant meets all position-related requirements. While USCIS approval does not guarantee a visa, consular officers can only refuse issuance if they suspect fraud or misrepresentation. They rely on their cultural and local knowledge to assess credibility. If concerns arise, they may request additional evidence or take more time to decide.
If issues are confirmed, the case is sent back to USCIS for review, where the petition is either reaffirmed or revoked. Consular officers themselves do not have the authority to revoke USCIS-approved petitions. In some visa-application cases, H-1B workers can be required to undergo "administrative processing" involving extra background checks. Under current rules, these checks are supposed to take ten or fewer days, but in some cases they have lasted years.

An individual with a valid H-1B visa does not need a visa to enter Costa Rica for tourism for up to 30 days. The H-1B visa must be stamped in the passport and be valid for at least six months, and the passport must be valid for at least six months after entering Costa Rica.

The Department of State introduced a limited Domestic Visa Renewal Pilot Program from January 29 to April 1, 2024, to simplify the H-1B visa renewal process. This program allowed select H-1B visa holders who had previously received their visas from specific consulates in Canada or India to renew them within the U.S., avoiding the need for international travel. Capped at 20,000 participants, the program offered 4,000 filing slots per week over five weeks. It was limited to H-1B renewals for applicants not subject to reciprocity fees or requiring in-person interviews. Those whose previous H-1B visas were marked "Clearance Received," indicating a prior Security Advisory Opinion, were not eligible to participate.

If an employer lays off an H-1B worker, the employer is required to pay the "reasonable" costs of the laid-off worker's transportation outside the U.S. If an H-1B worker is laid off or quits, the worker has a grace period of 60 days or until the I-94 expiration date, whichever is shorter, to find a new employer or leave the country. There is a 10-day grace period for an H-1B worker to depart the U.S. at the end of their authorized period of stay.
This grace period applies only if the worker works until the H-1B expiration date listed on their I-797 approval notice or I-94 card.

Annual cap

The H-1B visa program is subject to an annual cap of 65,000 visas, with an additional 20,000 visas available for applicants holding advanced degrees from U.S. institutions. Certain employers, including institutions of higher education and nonprofit research organizations, are exempt from these caps. Prospective H-1B workers seeking employment in the U.S. territories of the Northern Mariana Islands and Guam are exempt from the cap until December 31, 2029. If approved, visa holders may only work in the territory (NMI or Guam) for which they are approved. The Chile–United States and Singapore–United States Free Trade Agreements establish separate annual quotas for citizens of Chile (1,400/year) and Singapore (5,400/year). Unused quota slots are added to the general H-1B cap for the following year. The E-3 visa is specifically designated for Australian citizens and is not subject to the H-1B cap. It offers an alternative route for Australian professionals to seek employment in the United States, has its own annual cap of 10,500 visas, and has a different duration and application process.

History

On June 27, 1952, Congress passed the Immigration and Nationality Act after overriding a veto by President Harry S. Truman. For the first time, the Immigration and Nationality Act codified United States immigration, naturalization, and nationality law into permanent statutes, and introduced a system of selective immigration by giving special preference to foreigners with skills that were urgently needed by the U.S. Several types of visas were established, including an H-1 visa for "an alien having a residence in a foreign country which he has no intention of abandoning who is of distinguished merit and ability and who is coming temporarily to the United States to perform temporary services of an exceptional nature requiring such merit and ability."
The term "distinguished merit and ability" was not new to U.S. immigration law; it had previously been used as a qualification for musicians and artists who wanted to enter the country. The visa was called an H-1 visa because it was enacted by section 101(15)(H)(1) of the Immigration and Nationality Act.

President George H. W. Bush signed the Immigration Act of 1990 into law on November 20, 1990. The H-1 visa was split into the H-1A visa for nurses and the H-1B visa for workers in specialty occupations. The Immigration Act defined a specialty occupation as "an occupation that requires theoretical and practical application of a body of highly specialized knowledge, and attainment of a bachelor's or higher degree in the specific specialty (or its equivalent) as a minimum for entry into the occupation in the United States." To qualify, a visa applicant needed any applicable state license for the particular occupation, and either an educational degree related to the occupation or an equivalent amount of professional experience. For the first time, a quota of 65,000 H-1B visas per fiscal year was established. Employers were required by law to pay such employees at least the prevailing wage for the position, and to make certain attestations by way of a Labor Condition Application.

President Bill Clinton signed the American Competitiveness and Workforce Improvement Act into law on October 21, 1998. The law required each H-1B application to include an additional $500 payment that would be used for retraining U.S. workers to reduce the future need for H-1B visas. The quota of H-1B visas was increased from 65,000 to 115,000 for fiscal years 1999 and 2000 only. An employer with a large number of employees in H-1B status, or one who had committed a willful misrepresentation in the recent past, was required to attest that the additional H-1B worker would not displace any U.S. workers.
The act also gave investigative authority to the United States Department of Labor. On October 17, 2000, President Bill Clinton signed into law the American Competitiveness in the 21st Century Act, which increased the retraining fee from $500 to $1,000. The quota was increased to 195,000 H-1B visas in fiscal years 2001, 2002, and 2003 only. Nonprofit research institutions sponsoring workers for H-1B visas became exempt from the H-1B visa quotas. Under the law, a worker in H-1B status who had already been subject to a visa quota would not be subject to quotas again when requesting a transfer to a new employer or when applying for a three-year extension. An H-1B worker was now allowed to change employers if the worker had an I-485 application pending for six months and an approved I-140, and if the new job was substantially comparable to their current one. The spouse of an H-1B holder in H-4 status may be eligible to work in the U.S. under certain circumstances: the H-1B holder must be the beneficiary of an approved Immigrant Petition for Alien Worker (Form I-140) or have been granted H-1B status under sections 106(a) and (b) of the American Competitiveness in the 21st Century Act of 2000.

Congress ratified the Singapore–United States Free Trade Agreement in 2003, and later that year, the Chile–United States Free Trade Agreement. With these free trade agreements, a new H-1B1 visa available solely to people from Singapore or Chile was established. Unlike H-1B visas, which had a limited renewal time, H-1B1 visas could be renewed indefinitely. H-1B1 visas are subject to a separate quota of 6,000 per fiscal year. Unlike the H-1B, the H-1B1 is not a dual-intent visa, and an H-1B1 applicant must convince the visa officer they have no intention of permanently immigrating to the United States.

The H-1B Visa Reform Act of 2004 was part of the Consolidated Appropriations Act, 2005, which President George W. Bush signed on December 6, 2004.
For employers with 26 or more employees, the retraining fee was increased from $1,000 to $1,500; it was reduced to $750 for all other employers. A new $500 "anti-fraud fee" was to be paid by the employer with the visa application. The H-1B quota returned to 65,000 per year, and the law added 20,000 visas for applicants with a master's or doctoral degree from a U.S. graduate school. Governmental entities became exempt from H-1B visa quotas. According to the law, H-1B visas that were revoked due to either fraud or willful misrepresentation would be added to the H-1B visa quota for the following fiscal year. The law also allowed one-year extensions for H-1B visa holders who were applying for permanent residency and whose petitions had been pending for a long time. The Department of Labor gained more investigative authority, but an employer could defend against alleged misdeeds by using either the Good Faith Compliance Defense or the Recognized Industry Standards Defense.

In 2007, Senators Dick Durbin of Illinois and Charles Grassley of Iowa began introducing "The H-1B and L-1 Visa Fraud & Prevention Act". According to Durbin, speaking in 2009: "The H-1B visa program should complement the U.S. workforce, not replace it ... The ... program is plagued with fraud and abuse and is now a vehicle for outsourcing that deprives qualified American workers of their jobs." Compete America, a tech industry lobbying group, opposed the proposed legislation.

The Consolidated Natural Resources Act of 2008 federalized immigration in the U.S. territory of the Commonwealth of the Northern Mariana Islands, and stipulated that, during a transition period, numerical limitations would not apply to otherwise qualified workers in the H visa category in the U.S. territories of Guam and the Northern Mariana Islands. The exemption does not apply to any employment to be performed outside of those territories.
The Employ American Workers Act, part of the American Recovery and Reinvestment Act of 2009, was signed into law by President Barack Obama on February 17, 2009. Employers who applied to sponsor a new H-1B applicant and who had received funds under either the Troubled Asset Relief Program (TARP) or Section 13 of the Federal Reserve Act were required to attest that the additional H-1B worker would not displace any U.S. workers, and that the employer had not laid off and would not lay off any U.S. worker in a job equivalent to the H-1B position in the area of intended employment in the period beginning 90 days before the filing of the H-1B petition and ending 90 days after its filing.

In 2017, the U.S. Congress considered more than doubling the minimum wage for an H-1B holder from the $60,000 (USD) established in 1989 and unchanged since then. The High Skilled Integrity and Fairness Act, which U.S. Rep. Zoe Lofgren of California introduced, would raise H-1B holders' minimum salaries to $130,000. The Indian press criticized the action for confirming "the worst fears of [Indian] IT companies" following the reforms discussed during the 2016 presidential election by both major candidates, and for causing a 5% drop in the Bombay Stock Exchange's BSE SENSEX index. Some in India, however, had welcomed such a change since 2015. Lofgren's office described the bill as a measure to "curb outsourcing abuse," citing unfair tech hiring practices by employers including Disney and the University of California, San Francisco.

Since 2008, USCIS has updated and issued new rules regarding the H-1B visa. On April 2, 2008, Homeland Security Secretary Michael Chertoff announced a 17-month extension to Optional Practical Training for STEM students, as part of the H-1B Cap-Gap Regulations. This extension allows foreign STEM students to work in the U.S. for up to 29 months on a student visa, providing additional time to secure H-1B sponsorship.
A bachelor's degree in any field qualifies a student for the standard 12-month OPT, but the 17-month STEM extension requires a degree in an approved STEM major, as listed by USCIS. The cap-gap extension, introduced alongside this rule, allows STEM OPT workers with pending or approved H-1B petitions to remain in the U.S. while awaiting the start of their H-1B status.

On January 8, 2010, USCIS issued a memorandum clarifying that a valid employer-employee relationship must exist between an H-1B employer and a visa-holding employee, although the memo was ultimately not implemented. The memo stated that employers must demonstrate control over when, where, and how the employee performs their work to maintain compliance, and emphasized that common law principles guide the assessment of these factors. Third-party placement firms and staffing agencies generally do not qualify for H-1B sponsorship. Senator John Cornyn helped negotiate a halt to the memo's implementation following concerns from IT outsourcing firms.

In 2015, DHS issued a rule under which an H-1B worker's spouse in H-4 status may obtain work authorization if the H-1B holder is either the beneficiary of an approved Form I-140 immigrant petition or has been granted an H-1B extension under the American Competitiveness in the 21st Century Act. DHS implemented this rule to ease financial burdens on families transitioning from non-immigrant to permanent resident status. It also helps retain high-skilled workers by reducing incentives for them to leave the U.S., preventing disruptions for their employers and the economy.

In 2015, USCIS issued final guidance stating that a change in an H-1B worker's worksite location to a different metropolitan area is a material change that requires the employer to certify a new Labor Condition Application to the DHS. Temporary worksite changes do not require a new LCA; examples include an H-1B worker attending a training session, seminar, or conference of short duration, or a temporary move to a short-term placement of fewer than 30 days.
If the amended H-1B petition is disapproved but the original petition remains valid, the H-1B worker retains their H-1B status as long as they return to work at the original worksite.

On December 5, 2016, USCIS issued a memorandum to provide guidance on periods of admission for an individual in H-1B status. The memorandum stated that time spent as either an H-4 dependent or an L-2 dependent does not reduce the maximum allowable period of stay available to individuals in H-1B status. On November 18, 2017, United States Citizenship and Immigration Services released a rule that affects individuals in H-1B status whose employment ends. In these cases, the individual has a grace period of 60 days to leave the United States or change to another legal status that allows them to remain in the United States.

In 2005, the Violence Against Women and Department of Justice Reauthorization Act allowed work authorization for victims of domestic violence who are in H-4 status. On February 17, 2017, USCIS implemented a process for certain H-4 nonimmigrants who are victims of domestic violence to apply for work authorization under the category "(c)(31)", similar to VAWA self-petitioners. Eligible individuals include current spouses of H-1B visa holders and individuals whose marriage ended because of battery or extreme cruelty perpetrated by the individual's former spouse. The individual must have entered the U.S. in an H status, must continue to be in H-4 status, and must have been battered or subjected to extreme cruelty by the H-1B spouse, or have a child who was. The application must include evidence of the abuse. Before this policy was implemented, an abused spouse in H-4 status was required to leave the U.S. on the date the person divorced the abusive spouse. The divorced spouse may now legally remain and work in the U.S. while the divorce is pending or after it is finalized. If approved, the authorization is valid for two years.
A memorandum from December 22, 2000 stated that, because most computer-programming positions required a bachelor's degree, computer programming was considered a specialty occupation that qualified for an H-1B visa. On March 31, 2017, USCIS released a memorandum stating computer programming would no longer be automatically considered a specialty occupation, partly because a bachelor's degree was no longer typically required for these positions. An H-1B application for a computer programmer must now sufficiently describe the duties, level of experience, and responsibilities of the position to demonstrate that the position is senior, complex, specialized, or unique rather than entry-level. In addition, the Department of Justice warned employers not to discriminate against U.S. workers by showing a preference for hiring H-1B workers.

On April 18, 2017, President Donald Trump signed an executive order directing federal agencies to implement a "Buy American, Hire American" strategy, a key pledge of his campaign. According to a press briefing, the executive order directed federal agencies such as the Department of Labor, the Department of Justice, the DHS, and the Department of State to implement a new system favoring higher-skilled, higher-paid applicants. The executive order directed federal agencies to review and propose reforms to the H-1B visa system, and the departments would "fill in the details with reports and recommendations about what the administration can legally do." Trump stated the executive order would "end the theft of American prosperity," which he said had been brought on by low-wage immigrant labor.

On January 9, 2018, USCIS said it was not considering any proposal that would force H-1B visa holders to leave the U.S. during the green-card process.
USCIS said an employer could instead request extensions in one-year increments under section 106(a)–(b) of the American Competitiveness in the 21st Century Act. On June 28, 2018, USCIS announced that when a person's request for a visa extension is rejected, the person may be placed in deportation proceedings. The Trump administration said it was not considering any proposal that would force H-1B visa holders to leave the country.

On April 22, 2020, President Trump signed a presidential proclamation that temporarily suspended the entry of people with non-immigrant visas, including H-1B visas. On June 22, 2020, President Trump extended the suspension for H-1B visa holders until December 31, 2020. On December 31, 2020, Trump issued a presidential proclamation extending the suspension of entry until March 31, 2021, stating that such workers would pose "a risk of displacing and disadvantaging United States workers during the economic recovery following the COVID-19 outbreak." On October 28, 2020, USCIS promulgated a new rule to reform the H-1B lottery by prioritizing workers with the highest wages. President Joe Biden allowed the suspension to expire on March 31, 2021, which allowed H-1B visa holders to enter the U.S. beginning on April 1, 2021.

On September 19, 2025, President Donald Trump signed a proclamation requiring a one-time $100,000 fee when an employer applies for an H-1B visa for a worker between September 21, 2025, and September 21, 2026. The new fee is in addition to the application fees that were already in effect. The $100,000 fee is required for a worker's initial visa application but not for H-1B visa renewals, and is not required for workers whose H-1B visa was issued before September 21, 2025. The $100,000 payment is also not required if the U.S.
Secretary of Homeland Security decides that the hiring of a particular H-1B visa holder, all H-1B visa holders working at a company, or all such H-1B visa holders working in a particular industry is in the national interest of the U.S. and is not a threat to the U.S. The presidential proclamation does not change any rules about H-1B holders' travel outside the U.S. or their return to the U.S. Before the change, an H-1B visa application cost approximately $1,500.

Amazon was the top recipient of H-1B visas for fiscal year 2025, with over 10,000 visas approved. Microsoft, Meta, Apple, Tata and Google also received substantial numbers of H-1B visas in 2025.

In politics and culture

In 2015, reports surfaced of major companies like Disney and Southern California Edison replacing American workers with H-1B visa holders, sometimes requiring displaced employees to train their replacements as a condition for severance. The New York Times editorial board criticized the program for exploiting both foreign and domestic workers due to loopholes and weak enforcement. Following these revelations, ten U.S. senators urged the Department of Labor to investigate outsourcing practices at Southern California Edison, which had laid off 500 employees. After a ten-month review, the department found no legal violations. The Senate Judiciary Committee held hearings in 2015 and 2016, led by Senators Chuck Grassley and Jeff Sessions, to examine how the H-1B program affected U.S. workers. Witnesses, including labor leaders and economists, testified that companies were not required to prioritize American workers, allowing employers to use the program to import cheaper foreign labor instead of filling skills gaps. Senator Grassley characterized the program as favoring employers over U.S. workers rather than serving its intended purpose. The H-1B visa program was a contentious issue in the 2016 presidential election.
Donald Trump pledged to overhaul the system, arguing that it displaced American IT workers and suppressed wages. His campaign proposed raising the prevailing wage for H-1B workers to encourage hiring U.S. citizens and legal immigrants. Hillary Clinton criticized the program for enabling employers to hire cheaper, more compliant foreign workers, but viewed H-1B reform as part of broader immigration-policy changes. Bernie Sanders opposed guest worker programs and was skeptical of H-1B visas, citing their role in offshoring American jobs. He also rejected open-border policies, emphasizing the need to raise wages and prioritize domestic employment.

In 2019, USCIS launched the H-1B Employer Data Hub, providing public access to information on H-1B visa petitions dating back to fiscal year 2009. That same year, the USCIS Office of Policy and Strategy released an updated estimate of H-1B visa holders in the U.S. As of September 30, 2019, 583,420 individuals were authorized to work on an H-1B visa. USCIS estimated a total of 619,327 approved unique beneficiaries, adjusting for 2,100 visa denials by the State Department and subtracting 32,332 individuals who had obtained lawful permanent residency. An additional 1,475 visa holders had changed to a different non-immigrant status. In 2021, USCIS launched its first electronic registration system for the H-1B lottery.

Economic effects

Studies have shown H-1B visas can lead to lower wages for competing workers, but that the visas have had welfare-improving effects for Americans, leading to significant overall wage gains, lower consumer prices, greater innovation, and greater total factor productivity growth. A study found H-1B visa holders have been associated with greater innovation and economic performance. A 2022 study in the Journal of Political Economy found, however, that firms that received H-1B visas did not necessarily innovate, grow, or patent more than firms that did not.
Criticism

Critics of the H-1B visa program say it is a government labor subsidy for corporations. Paul Donnelly, in a 2002 article in Computerworld, cited Milton Friedman as stating the H-1B program acts as a subsidy for corporations. Others holding this view include Norman Matloff, who testified to the U.S. House Judiciary Committee Subcommittee on Immigration on the H-1B subject and who describes four types of labor savings the program offers corporations and employers.

Academic researchers have found no labor shortage in STEM, undercutting the primary reason for the H-1B visa's existence. In 2022, Howard University public-policy professor Ron Hira found there was no STEM shortage, pointing to stagnant wages in IT and a 7% decline in real wages for engineers. In the past, he has called the IT talent shortage "imaginary" and a front for companies that want to hire cheaper foreign guest workers. Studies from Rutgers University professor Hal Salzman and co-authors B. Lindsay Lowell and Daniel Kuehn have concluded the U.S. has been employing only 30% to 50% of its newly degreed, able and willing STEM workers in STEM fields. Salzman points to industry layoffs occurring at the same time industry claims a labor shortage: in his Senate Judiciary testimony, he stated that between 2006 and 2016, the IT industry, the predominant user of the H-1B visa, laid off on average 97,000 workers per year, more than the 74,000 H-1B workers brought in for the IT industry. A 2012 IEEE announcement of a conference on STEM education funding and job markets stated: "only about half of those with under-graduate STEM degrees actually work in the STEM-related fields after college, and after 10 years, only some 8% still do."
Norman Matloff's University of Michigan Journal of Law Reform paper said there has been no shortage of qualified American citizens to fill American computer-related jobs, and that the data offered as evidence of American corporations needing H-1B visas to address labor shortages was erroneous. The United States General Accounting Office (GAO) found in a 2000 report that controls on the H-1B program lacked effectiveness. The GAO report's recommendations were subsequently implemented.[citation needed]

High-tech companies often cite a tech-worker shortage when asking Congress to raise the annual cap on H-1B visas, and have succeeded in getting exemptions passed. The American Immigration Lawyers Association (AILA) described the situation as a crisis, and it was reported on by The Wall Street Journal, BusinessWeek, and The Washington Post. Employers applied pressure on Congress. Microsoft chairman Bill Gates testified on Capitol Hill in 2007 on behalf of the expanded visa program, "warning of dangers to the U.S. economy if employers can't import skilled workers to fill job gaps." Congress considered a bill to address the claims of a shortfall but did not revise the program. According to a study conducted by John Miano and the Center for Immigration Studies, there is no empirical data to support a claim of a worker shortage. Citing studies from Duke University, the Alfred P. Sloan Foundation, Georgetown University and others, critics have also argued that in some years, the number of foreign programmers and engineers imported outnumbered the number of jobs created by the industry. Hire Americans First has posted hundreds of first-hand accounts from individuals who say they were harmed by the program.

Critics of the H-1B program often complain about wage depression as a result of an increased supply of discounted guest workers. In the 21st century, labor experts have found guest workers are abundantly available in times of wage decline and weak workforce demand.
The Economic Policy Institute found sixty percent of certified H-1B positions were paid below the local median wage. In Washington, D.C., companies hiring a level-1 entry-level H-1B software developer received a discount of 36%, or $41,746; for level-2 workers, companies received a discount of 18%, or $20,863. The Department of Homeland Security's 2014 annual report indicated that H-1B workers in computer science were paid a mean salary of $75,000 annually, almost $25,000 below the average annual income for software developers, and studies have found H-1B workers are paid significantly less than U.S. workers.

Some critics have said the H-1B program is primarily used as a source of cheap labor. The Labor Condition Application (LCA) included in the H-1B petition is supposed to ensure H-1B workers are paid the prevailing wage in the labor market or the employer's actual average wage, whichever is higher, but there is evidence some employers get around these provisions and avoid paying the prevailing wage despite stiff penalties for abusers. The LCA process appears to offer protection to both U.S. and H-1B workers, but according to the U.S. General Accounting Office, enforcement limitations and procedural problems render these protections ineffective. The employer, not the Department of Labor, determines which sources determine the prevailing wage for an offered position, and it may choose from a variety of competing surveys, including its own wage surveys, provided such surveys follow rules and regulations.[citation needed] The law restricts the Department of Labor's approval process for LCAs to checking for "completeness and obvious inaccuracies." In FY 2005, only about 800 of over 300,000 submitted LCAs were rejected. According to attorney John Miano, the H-1B prevailing wage requirement is "rife" with loopholes.
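The LCA wage floor described above reduces to taking the higher of two figures: the prevailing wage for the occupation and area, and the employer's actual average wage for similar workers. A minimal sketch of that rule follows; the dollar amounts are hypothetical and chosen only for illustration.

```python
def lca_required_wage(prevailing_wage: float, actual_wage: float) -> float:
    """Return the minimum wage an LCA obligates the employer to pay:
    the higher of the prevailing wage and the employer's actual wage."""
    return max(prevailing_wage, actual_wage)

# Hypothetical figures: prevailing wage $95,000, employer's actual wage $88,000
print(lca_required_wage(95000.0, 88000.0))  # 95000.0
```

Because the employer selects which survey supplies the prevailing-wage input, critics argue the effective floor can be driven down even though the max() rule itself is sound.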
Opponents of the H-1B visa program say wage depression in STEM causes young American college graduates to stop pursuing these fields. Critics have also said the program enables Silicon Valley to discriminate against U.S. citizens and permanent residents. In 2021, Facebook settled a claim with the Department of Justice that it discriminated against U.S. workers in favor of temporary visa holders; the company paid a $4.75-million civil penalty and set aside $9.5 million for eligible victims. Critics of the H-1B visa program also say it enables Silicon Valley to discriminate against older workers.

Some workers who come to the U.S. on H-1B visas receive poor, unfair, and illegal treatment by brokers who place them with jobs in the U.S., according to a report published in 2014. The United States Trafficking Victims Protection Reauthorization Act of 2013 was passed to help protect the rights of foreign workers in the U.S., and the U.S. Department of State distributes pamphlets to inform foreign workers of their rights. Some companies have paid H-1B workers less than they said they would in the H-1B visa applications they filed. Labor researchers found that HCL's underpayments to its H-1B workers totaled $95 million per year. Critics say employers exercise outsized control over H-1B workers because the visa ties workers to their employers; these workers are less likely to complain about poor working conditions for fear of visa revocation and deportation.

In 2017, President Donald Trump expressed concerns about using the H-1B visa as a pathway to permanent residency and proposed restructuring the immigration system, including introducing a points-based system. In response, some individuals sought alternative routes to permanent residency, such as the EB-5 visa program, which offers a more direct path.
Advocacy groups opposing changes to H-1B policies launched public awareness campaigns, including posters in the San Francisco Bay Area Rapid Transit system. Critics accuse American and outsourcing companies of using H-1B visa workers for body shopping and for offshoring work abroad. Researchers have found that two-thirds of IT jobs are offshorable, and that the remaining third stays onshore to serve as the conduit between American clients and offshore work teams. The leading users of H-1B visas are Indian outsourcing firms. In 2021, half of the top 30 employers of H-1B visa holders were outsourcing firms. The top 10 H-1B employers in 2014, including Tata Consultancy, Cognizant, Infosys, Wipro, Accenture, HCL America, and IBM, all used the program to ship jobs offshore. Critics of H-1B use for outsourcing have also noted that more H-1B visas are granted to companies headquartered in India than to companies headquartered in the United States. Although these IT outsourcing companies have a physical presence in the U.S., they hire temporary foreign guest workers. Senator Dick Durbin stated in a speech on H-1B visa reform: The H-1B job visa lasts for three years and can be renewed for three years. What happens to those workers after that? Well, they could stay. It is possible. But these new companies have a much better idea for making money. They send the engineers to America to fill spots—and get money to do it—and then after the three to six years, they bring them back to work for the companies that are competing with American companies. They call it their outsourcing visa. They are sending their talented engineers to learn how Americans do business and then bring them back and compete with those American companies. Of all computer systems analysts and programmers on H-1B visas in the U.S., 74 percent were from Asia.[citation needed] The large migration of Asian IT professionals to the U.S. has been a central component of the emergence of the offshore outsourcing industry.
In FY 2009, due to the worldwide recession, applications for H-1B visas by offshore outsourcing firms were significantly lower than in previous years, yet 110,367 H-1B visas were issued, and 117,409 were issued in FY 2010.[citation needed] Computerworld and The New York Times have reported on the inordinate share of H-1B visas received by firms that specialize in offshore outsourcing, the consequent inability of employers to hire foreign professionals with legitimate technical and language skill combinations, and the replacement of American professionals already performing their job functions, who were coerced into training their foreign replacements. There have been cases where employers used the program to replace their American employees with H-1B employees; in some cases, the laid-off employees were ordered to train their replacements. In 2013, Northeast Utilities laid off 350 tech workers, many of whom trained the replacements who were hired on H-1B visas to do their jobs. In October 2014, Walt Disney World laid off 250 IT workers, some of whom had, as their final assigned task for the company, to train the replacements who had been hired on H-1B visas. Southern California Edison laid off 540 tech workers in 2014, requiring many to train the replacements who had been hired on H-1B visas. Fossil laid off 100 tech workers and hired 25 on H-1B visas, who were then trained by the laid-off employees in what Fossil termed "knowledge sharing." Researchers found that during the 2022 tech layoffs, companies laid off their U.S. workforces while continuing to bring in more H-1B workers: the top 30 H-1B employers in 2022 laid off at least 85,000 workers while bringing in 34,000 H-1B workers. Entrepreneurs do not qualify for the H-1B visa. The United States immigration system's EB-5 visa program does permit foreign entrepreneurs to apply for a green card if they make a sufficient investment in a commercial enterprise and intend to create 10 or more jobs in the United States.
In 2014, the University of Massachusetts began a program allowing entrepreneurs to found U.S. companies while fulfilling visa requirements by teaching and mentoring on campus, with the university as sponsoring employer. Self-employed consultants have no visa that allows them to enter the country and perform work independently for unspecified, extended periods. A B-1 visa would permit temporary travel to the U.S. to consult for specific periods. Consulting companies have been formed for the sole purpose of sponsoring employees on H-1B visas to allow them to perform work for clients, with the company sharing the resulting profit. According to the USCIS's H-1B Benefit Fraud & Compliance Assessment of September 2008, 21% of H-1B visas granted originate from applications that were fraudulent or had technical violations. Fraud was defined as a willful misrepresentation, falsification, or omission of a material fact. Technical violations, errors, omissions, and failures to comply that are not within the fraud definition were included in the 21% rate. In 2009, federal authorities arrested people for a nationwide H-1B visa scam in which the perpetrators allegedly submitted false statements and documents in connection with petitions for H-1B visas. Fraud has included acquisition of a fake university degree for the prospective H-1B worker, coaching the worker to lie to consul officials, hiring a worker for which there is no U.S. job, charging the worker money to be hired, benching the worker with no pay, and taking a cut of the worker's U.S. salary. The workers, who have little choice in the matter, are also engaged in fraud and may be charged, fined, and deported. Outsourcing companies game the lottery system by filing as many electronic lottery applications as possible for $10 each for jobs that do not exist. In 2023, there were 781,000 lottery entries for 85,000 visas. This was partly the result of different companies submitting the same applicant multiple times. 
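The lottery figures above also show why duplicate registrations pay off: each extra entry raises an applicant's chance of selection. A minimal sketch of that arithmetic using the article's FY 2023 numbers, treating each registration as an independent draw (a simplifying assumption for illustration, not how USCIS actually runs or deduplicates the lottery):

```python
# Selection odds in the FY 2023 H-1B electronic lottery, using the figures
# cited above: 781,000 entries for 85,000 visas. Treating duplicate
# registrations as independent draws is an illustrative simplification.
ENTRIES = 781_000
VISAS = 85_000

p_single = VISAS / ENTRIES  # chance a single registration is selected

def p_at_least_one(n: int, p: float = p_single) -> float:
    """Chance of at least one selection across n duplicate registrations."""
    return 1.0 - (1.0 - p) ** n

print(f"one entry: {p_single:.1%}")               # ~10.9%
print(f"three entries: {p_at_least_one(3):.1%}")  # ~29.2%
```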
USCIS said there is a high prevalence of fraud with the new electronic registration system.
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-28] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in artificial intelligence (AI), social media and technology that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As Chief Engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI.
On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, bringing its total funding to date to over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in it. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for military AI, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers; the division, previously the company's largest, had played a central role in training Grok, xAI's chatbot. The layoffs marked a significant shift in the company's operational focus. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use.
The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns about the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily augment the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of air-polluting gases, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators produce about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion is underpinned by xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion.
On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring reorganized xAI into four primary development teams: one for the Grok app and others for its other features, such as Grok Imagine. Grokipedia, X and API features would fall under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would be available only to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities. On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4.
A high-performance version of the model, called Grok Heavy, was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026.
========================================
[SOURCE: https://en.wikipedia.org/wiki/GNS_theory] | [TOKENS: 1987]
Contents GNS theory GNS theory is an informal field of study developed by Ron Edwards which attempts to create a unified theory of how role-playing games work. Focused on player behavior, GNS theory holds that participants in role-playing games organize their interactions around three categories of engagement: Gamism, Narrativism and Simulation. The theory focuses on player interaction rather than statistics, and encompasses game design beyond role-playing games. Analysis centers on how player behavior fits the above parameters of engagement and how these preferences shape the content and direction of a game. GNS theory is used by game designers to dissect the elements which attract players to certain types of games. History GNS theory was inspired by the Threefold Model, which emerged from discussions on the rec.games.frp.advocacy group on Usenet in the summer of 1997. The Threefold Model defined drama, simulation and game as three paradigms of role-playing. The name "Threefold Model" was coined in a 1997 post by Mary Kuhner outlining the theory. Kuhner posited the main ideas of the theory on Usenet, and John H. Kim later organized the discussion and helped it grow. In his article "System Does Matter", first posted to the website Gaming Outpost in July 1999, Ron Edwards wrote that all RPG players have at least one of three perspectives. According to Edwards, enjoyable RPGs focus on one perspective, and a common error in RPG design is to try to cater to all three types simultaneously. His article could be seen as a warning against generic role-playing game systems from large developers. Edwards connected GNS theory to game design, which helped to popularize the theory. On December 2, 2005, Edwards closed the forums on the Forge about GNS theory, saying that they had outlived their usefulness. Aspects A gamist makes decisions to satisfy predefined goals in the face of adversity: to win.
Edwards wrote, I might as well get this over with now: the phrase "Role-playing games are not about winning" is the most widespread example of synecdoche in the hobby. Potential Gamist responses, and I think appropriately, include: "Eat me," (upon winning) "I win," and "C'mon, let's play without these morons." These decisions are most common in games pitting characters against successively-tougher challenges and opponents, and may not consider why the characters are facing them in the first place. Gamist RPG design emphasizes parity; all player characters should be equally strong and capable of dealing with adversity. Combat and diversified options for short-term problem solving (for example, lists of specific spells or combat techniques) are frequently emphasized. Randomization provides a gamble, allowing players to risk more for higher stakes rather than modelling probability. Narrativism relies on outlining (or developing) character motives, placing characters into situations where those motives conflict and making their decisions the driving force. For example, a samurai sworn to honor and obey his lord might be tested when directed to fight his rebellious son; a compassionate doctor might have his charity tested by an enemy soldier under his care; or a student might have to decide whether to help her best friend cheat on an exam. This has two major effects. Characters usually change and develop over time, and attempts to impose a fixed storyline are impossible or counterproductive. Moments of drama (the characters' inner conflict) make player responses difficult to predict, and the consequences of such choices cannot be minimized. Revisiting character motives or underlying emotional themes often leads to escalation: asking variations of the same "question" at higher intensity levels. Simulationism is a playing style recreating, or inspired by, a genre or source. Its major concerns are internal consistency, analysis of cause and effect and informed speculation. 
Characterized by physical interaction and details of setting, simulationism shares with narrativism a concern for character backgrounds, personality traits and motives to model cause and effect in the intellectual and physical realms. Simulationist players consider their characters independent entities, and behave accordingly; they may be reluctant to have their character act on the basis of out-of-character information. Similar to the distinction between actor and character in a film or play, character generation and the modeling of skill growth and proficiency can be complex and detailed. Many simulationist RPGs encourage illusionism (manipulation of in-game probability and environmental data to point to predefined conclusions) to create a story. Call of Cthulhu recreates the horror and humanity's cosmic insignificance in the Cthulhu Mythos, using illusionism to craft grisly fates for the players' characters and maintain consistency with the source material. Simulationism maintains a self-contained universe operating independent of player will; events unfold according to internal rules. Combat may be broken down into discrete, semi-randomised steps for modeling attack skill, weapon weight, defense checks, armor, body parts and damage potential. Some simulationist RPGs explore different aspects of their source material, and may have no concern for realism; Toon, for example, emulates cartoon hijinks. Role-playing game systems such as GURPS and Fudge use a somewhat-realistic core system which can be modified with sourcebooks or special rules. Terminology GNS theory incorporates Jonathan Tweet's three forms of task resolution which determine the outcome of an event. According to Edwards, an RPG should use a task-resolution system (or combination of systems) most appropriate for that game's GNS perspective. 
Edwards has said that he changed the name of the Threefold Model's "drama" type to "narrativism" in GNS theory to avoid confusion with the "drama" task-resolution system. GNS theory identifies five elements of role-playing, and details four stances the player may take in making decisions for their character. Criticism Brian Gleichman, a self-identified Gamist whose works Edwards cited in his examination of Gamism, wrote an extensive critique of GNS theory and the Big Model. He states that although any RPG intuitively contains elements of gaming, storytelling, and self-consistent simulated worlds, GNS theory "mistakes components of an activity for the goals of the activity", emphasizes player typing over other concerns, and assumes "without reason" that there are only three possible goals in all of role-playing. Combined with the principles outlined in "System Does Matter", this produces a new definition of the RPG, in which its traditional components (challenge, story, consistency) are mutually exclusive, and any game system that mixes them is labeled "incoherent" and thus inferior to "coherent" ones. To disprove this, Gleichman cites a survey conducted by Wizards of the Coast in 1999, which identified four player types and eight "core values" (instead of the three predicted by GNS theory) and found that these are neither exclusive, nor strongly correlated with particular game systems. Gleichman concludes that GNS theory is "logically flawed", "fails completely in its effort to define or model RPGs as most people think of them", and "will produce something that is basically another type of game completely". Gleichman also states that just as the Threefold Model (developed by self-identified Simulationists who "didn't really understand any other style of player besides their own") "uplifted" Simulation, Edwards' GNS theory "trumpets" its definition of Narrativism.
According to him, Edwards' view of Simulationism as "'a form of retreat, denial, and defense against the responsibilities of either Gamism or Narrativism'" and characterization of Gamism as "being more akin to board games" than to RPGs, reveals an elitist attitude surrounding the narrow GNS definition of narrative role-playing, which attributes enjoyment of any incompatible play-style to "'[literal] brain damage'". Lastly, Gleichman states that most games rooted in the GNS theory, e.g. My Life with Master and Dogs in the Vineyard, "actually failed to support Narrativism as a whole, instead focusing on a single Narrativist theme", and have had no commercial success. Fantasy author and Legend of the Five Rings contributor Marie Brennan reviews the GNS theory in the eponymous chapter of her 2017 non-fiction book Dice Tales. While she finds many of its "elaborations and add-ons that accreted over the years... less than useful", she suggests that the "core concepts of GNS can be helpful in elucidating some aspects of [RPGs], ranging from game design to the disputes that arise between players". A self-identified Narrativist, Brennan finds Edwards' definition of said creative agenda ("exploration of theme") too narrow, adding "character development, suspense, exciting plot twists, and everything else that makes up a good story" to the Narrativist priorities list. She concludes that rather than being a practical guide, GNS is more useful for explaining the general ideas of role-playing and especially "for understanding how gamers behave". The role-playing game historian Shannon Appelcline (author of Designers & Dragons) drew parallels between three of his contemporary commercial categories of RPG products and the three basic categories of GNS. He posited that "OSR games are largely gamist and indie games are largely narrativist", while "the mainstream games... 
tend toward simulationist on average", and cautiously concluded that this "makes you think that Edwards was on to something". Vincent Baker, a noted participant in the Forge, contributor to GNS theory, and developer of many role-playing games, has said that "the model is obsolete" and suggested that trying to fit play into the boxes provided by the model may contribute to misunderstanding it.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Distaff] | [TOKENS: 903]
Contents Distaff A distaff (/ˈdɪstɑːf/, /ˈdɪstæf/, also called a rock) is a tool used in spinning. It is designed to hold the unspun fibers, keeping them untangled and thus easing the spinning process. It is most commonly used to hold flax and sometimes wool, but can be used for any type of fibre. Fiber is wrapped around the distaff and tied in place with a piece of ribbon or string. The word comes from Low German dis, meaning a bunch of flax, connected with staff. As an adjective, the term distaff is used to describe the female side of a family. The corresponding term for the male side of a family is the "spear" side. Form In Western Europe, there were two common forms of distaff, depending on the spinning method. The traditional form is a staff held under one's arm while using a spindle – see the figure illustration. It is about 3 feet (0.9 m) long, held under the left arm, with the fibres drawn from it by the right hand. This version is the older of the two, as spindle-spinning predates spinning on a wheel. A distaff can also be mounted as an attachment to a spinning wheel. On a wheel, it is placed next to the bobbin, where it is in easy reach of the spinner. This version is shorter, but otherwise does not differ from the spindle version. By contrast, the traditional Russian distaff used both with spinning wheels and with spindles, is L-shaped and consists of a horizontal board, known as the dontse (Russian: донце), and a flat vertical piece, frequently oar-shaped, to the inner side of which the bundle of fibers was tied or pinned. The spinner sat on the dontse, with the vertical piece of the distaff to their left, and drew the fibers out with the left hand. The distaff was often richly carved and painted and was an important element of Russian folk art. Recently,[when?] handspinners have begun using wrist distaffs to hold the fiber; these are made of flexible material, such as braided yarn, and can swing freely from the wrist. 
A wrist distaff generally consists of a loop with a tail, at the end of which is a tassel, often with beads on each strand. The spinner wraps the roving or tow around the tail and through the loop to keep it out of the way and to keep it from getting snagged. Dressing Dressing a distaff is the act of wrapping the fiber around the distaff. With flax, the wrapping is done by laying the flax fibers down, approximately parallel to each other and the distaff, then carefully rolling the fibers onto the distaff. A ribbon or string is then tied at the top and loosely wrapped around the fibers to keep them in place. Other meanings The term distaff is also used as an adjective to describe the matrilineal branch of a family, i.e., to the person's mother and her blood relatives. This term developed in the English-speaking communities where a distaff spinning tool was used often to symbolize domestic life. Proverbs 31 cites the "wife of noble character" as one who "holds the distaff". One still-recognized use of the term is in horse racing, in which races limited to female horses are referred to as distaff races. From 1984 until 2007, at the American Breeders' Cup, the major race for fillies and mares was the Breeders' Cup Distaff. From 2008 to 2012, the event was referred to as the Breeders' Cup Ladies' Classic. Starting in 2013, the name of the race changed back to Breeders' Cup Distaff. It is commonly regarded as the female analog of the better-known Breeders' Cup Classic, though female horses are not barred from entering that race. The phrase "on the distaff side" was commonly used by reporters covering athletic competitions when transitioning from men's events over to the highlights of women's events. In Norse mythology, the goddess Frigg spins clouds from her bejewelled distaff in the Norse constellation known as Frigg's Spinning Wheel (Friggerock, also known as Orion's Belt).
========================================
[SOURCE: https://en.wikipedia.org/wiki/PC_PowerPlay] | [TOKENS: 1546]
Contents PC PowerPlay PC PowerPlay (PCPP) was Australia's only dedicated PC games magazine. PC PowerPlay focused on news and reviews for upcoming and newly released games on the Microsoft Windows platform. The magazine also reviewed computer hardware for use in gaming computers. The magazine was published by Next Publishing Pty Ltd from 1996 to 2018, when it was transferred to Future Australia. In 2018, Future, owner and publisher of PC Gamer, purchased PC PowerPlay and related computing titles from nextmedia, incorporating PC PowerPlay articles into the online versions of PC Gamer. In September 2025, the magazine released its final issue. While no physical media was included in the last few years, for most of the life of the magazine it included either a CD or DVD that featured game demos, freeware games, anime shows, film/anime/game teaser trailers, game patches, game mods, game maps, PC utilities and computer wallpapers. These were useful in an era of poor internet connectivity for most of Australia. Main sections The main sections included in each month's magazine were letters to the editor, previews, reviews, feature articles, artwork, pictures of computers owned by readers, flashbacks to old games, lists of PC builds to help people purchase new products, and advertising. There were also various opinion and comedic sections such as "Dr. Claw" and "Yellow Boots". Scoring system In early issues, reviews of games and products assigned a score out of ten. PC PowerPlay gave 10/10 scores to a number of games; a 10/10 game was connoted not as a perfect game but as a "masterpiece with flaws". By issue 7, the magazine had switched to a percentage rating system, which it retained until its final issue. No game was assessed at 100%; the highest score awarded was 98%. The lowest score given to a game by PC PowerPlay was 2%, for Mindscape's Howzat World Cricket Quest in March 1998.
Website & forum In addition to the magazine itself, there were several websites closely linked with it. The official PC PowerPlay website was launched in 2001, but was taken offline following the collapse of the online division of publishing company Next Media, then lay dormant until July 2006. While it had a typical front page with online articles, most of the traffic went to the PC PowerPlay forums. The forum database was preserved across a number of technology migrations: it began on a ColdFusion-powered site in 2001, then moved to phpBB, and was converted to vBulletin in 2007. It was one of the largest Australia-specific online forums while it existed. The forums provided discussion of gaming and computer-related software and technology. There were also "off-topic" sections dedicated to general discussion and banter, serious discussions of Australian national, regional and international issues, and a section for discussions of TV shows, films and music. This design also allowed the organisation of multiplayer games amongst PCPP readers and other forum members. The general discussion section of the PCPP forum was titled "Rhubarb" because of editor Anthony Fordham's love of the old British joke of having extras in movie crowd scenes say "rhubarbrhubarbrhubarb" to simulate incidental conversation. A website re-launch occurred on 22 April 2009, consisting of a customised Joomla install and layout and an intention to regularly update blogs, news articles and major features, although it quickly fell back into the same problems with contributors not updating the news sections, leaving the forum as the only regularly updated section. On 12 March 2010, the PCPP website and forum software was replaced with a CMS provided by CyberGamer. This software also powers the cybergamer.com.au website.
PCPP was then listed as a "Media Partner" of CyberGamer, while CyberGamer received advertising space within PCPP and PCPP's sister magazine, Hyper. A press release detailing the arrangement between the two parties was issued on 18 March. As part of this online merger, PCPP's established community was incorporated into the CyberGamer Network. The forum was eventually closed in December 2017, as server costs and a dwindling userbase made it uneconomical to continue. The front page was redirected to a PC Gamer website for the magazine's writers to update, but articles ceased being updated in 2018.

CD-ROM version, DVD-ROM version and disc-less version

The magazine launched in 1996 with a 640-megabyte CD-ROM cover disc, which was upgraded to a double CD-ROM set with the January 2000 issue. The DVD-ROM edition joined the line-up with the April 2002 issue and ran alongside the CD-ROM version for three years; the CD-ROM version finally ceased production in 2005. The August 1998 cover disc of PC PowerPlay was infected with the Marburg virus, causing the magazine to apologise in the following issue and give away antivirus software from Kaspersky Lab. Marburg was also spread by a PC Gamer cover disc and WarGames: Defcon 1 in the same year, which CNN Money stated caused the malware to become a "widespread threat". From April to December 2002, the DVD-ROM edition of PC PowerPlay also contained one episode of an anime series licensed and distributed in Australia by Madman Entertainment, such as Boogiepop Phantom, Love Hina, Mobile Suit Gundam Wing, and Sorcerous Stabber Orphen. The November 2005 edition included the first disc-less magazine, at a little over half the price of the DVD-ROM version. While sales were not spectacular, dropping the CD-ROM did slow the rate of decline of the non-DVD-ROM version of the magazine. Subscriptions were subsequently offered for the disc-less version at half the sale price.
The Bunker was a section of the DVD-ROM originally compiled each month by "ROM", a respected member of the PCPP online community. Following his retirement from the position (announced in issue #143), The Bunker underwent a drastic transformation and became the PCPP Community Bunker, to which readers and members of the online community were actively encouraged to contribute. The Bunker was replaced in 2009 with a streamlined Applications and Utilities section.

Competition

Australian publishing company Derwent Howard launched a competitor called PC Games Addict in 2002, using some Australian content filled out with licensed content from the UK's PC Gamer and PC Format. The magazine ceased publication in 2005, leaving PC PowerPlay with no direct competition in the Australian market for PC games magazines. There was indirect competition from technology enthusiast magazines such as Atomic and FamilyPC Australia. There were also imported magazines from the UK and US, such as PC Gamer and PC Zone, but their circulations were minimal in comparison with the local products. An Australian version of PC Gamer launched shortly after PC PowerPlay but was shut down in 1999 following a dispute between the publisher and printer.

Closure

In September 2025, after almost three decades of publication, the final issue, #311, was released. The issue featured normal content such as reviews and previews of upcoming games and hardware, but also included several retrospective articles by the editor and many long-time staff. Included with issue #311, subscribers received a personalised letter from editor Ben Mansill, titled 'Dear <name>', letting them know that it would be the last issue and that the remainder of their subscription had been transferred to another Future Australia magazine, APC.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Birthday#cite_note-27]
Birthday

A birthday is the anniversary of the birth of a person or the figurative birth of an institution. Birthdays of people are celebrated in numerous cultures, often with birthday gifts, birthday cards, a birthday party, or a rite of passage. Many religions celebrate the birth of their founders or religious figures with special holidays (e.g. Christmas, Mawlid, Buddha's Birthday, Krishna Janmashtami, and Gurpurb). There is a distinction between birthday and birthdate (also known as date of birth): the former, except for February 29, occurs each year (e.g. January 15), while the latter is the complete date when a person was born (e.g. January 15, 2001).

Coming of age

In most legal systems, one becomes a legal adult on a particular birthday when they reach the age of majority (usually between 12 and 21), and reaching age-specific milestones confers particular rights and responsibilities. At certain ages, one may become eligible to leave full-time education, become subject to military conscription or to enlist in the military, to consent to sexual intercourse, to marry with parental consent, to marry without parental consent, to vote, to run for elected office, to legally purchase (or consume) alcohol and tobacco products, to purchase lottery tickets, or to obtain a driver's licence. The age of majority is when minors cease to legally be considered children and assume control over their persons, actions, and decisions, thereby terminating the legal control and responsibilities of their parents or guardians over and for them. Most countries set the age of majority at 18, though it varies by jurisdiction. Many cultures celebrate a coming-of-age birthday when a person reaches a particular year of life. Some cultures celebrate landmark birthdays in early life or old age.
In many cultures and jurisdictions, if a person's real birthday is unknown (for example, if they are an orphan), their birthday may be adopted or assigned to a specific day of the year, such as January 1. Racehorses are reckoned to become one year old in the year following their birth, on January 1 in the Northern Hemisphere and August 1 in the Southern Hemisphere.

Birthday parties

In certain parts of the world, an individual's birthday is celebrated by a party featuring a specially made cake. Presents appropriate to their age are bestowed on the individual by the guests. Other birthday activities may include entertainment (sometimes by a hired professional, e.g. a clown, magician, or musician) and a special toast or speech by the birthday celebrant. The last stanza of Patty Hill's and Mildred Hill's famous song "Good Morning to All" (unofficially titled "Happy Birthday to You") is typically sung by the guests at some point in the proceedings. In some countries, a piñata takes the place of a cake. The birthday cake may be decorated with lettering and the person's age, or studded with the same number of lit candles as the age of the individual. The celebrated individual may make a silent wish and attempt to blow out the candles in one breath; if successful, superstition holds that the wish will be granted. In many cultures, the wish must be kept secret or it will not "come true".

Birthdays as holidays

Historically significant people's birthdays, such as those of national heroes or founders, are often commemorated by an official holiday marking the anniversary of their birth. Some notables, particularly monarchs, have an official birthday on a fixed day of the year, which may not necessarily match the day of their birth but on which celebrations are held. In Mahayana Buddhism, many monasteries celebrate the anniversary of Buddha's birth, usually in a highly formal, ritualized manner.
They treat the Buddha's statue as if it were the Buddha himself, as if he were alive, bathing and "feeding" him. Jesus Christ's traditional birthday is celebrated as Christmas Eve or Christmas Day around the world, on December 24 or 25, respectively. As some Eastern churches use the Julian calendar, their December 25 falls on January 7 of the Gregorian calendar. These dates are traditional and have no connection with Jesus's actual birthday, which is not recorded in the Gospels. Similarly, the birthdays of the Virgin Mary and John the Baptist are liturgically celebrated on September 8 and June 24, especially in the Roman Catholic and Eastern Orthodox traditions (although for those Eastern Orthodox churches using the Julian calendar the corresponding Gregorian dates are September 21 and July 7, respectively). As with Christmas, the dates of these celebrations are traditional and probably have no connection with the actual birthdays of these individuals. Catholic saints are remembered by a liturgical feast on the anniversary of their "birth" into heaven, that is, their day of death. In Hinduism, Ganesh Chaturthi is a festival celebrating the birth of the elephant-headed deity Ganesha, in extensive community celebrations and at home. Figurines of Ganesha are made for the holiday and are widely sold. Sikhs celebrate the anniversaries of the births of Guru Nanak and the other Sikh gurus, known as Gurpurb. Mawlid, the anniversary of the birth of Muhammad, is celebrated on the 12th or 17th day of Rabi' al-awwal by adherents of Sunni and Shia Islam, respectively; these are the two most commonly accepted dates of Muhammad's birth. However, there is much controversy regarding the permissibility of celebrating Mawlid, as some Muslims judge the custom an unacceptable practice according to Islamic tradition. In Iran, Mother's Day is celebrated on the birthday of Fatima al-Zahra, the daughter of Muhammad.
Banners reading Ya Fatima ("O Fatima") are displayed on government buildings, private buildings, public streets and car windows.

Religious views

In Judaism, rabbis are divided over the custom of celebrating birthdays, although the majority of the faithful accept it. In the Torah, the only mention of a birthday is the celebration of Pharaoh's birthday in Egypt (Genesis 40:20). Although the birthday of Jesus of Nazareth is celebrated as a Christian holiday on December 25, the celebration of an individual person's birthday has historically been subject to theological debate. Early Christians, notes The World Book Encyclopedia, "considered the celebration of anyone's birth to be a pagan custom." Origen, in his commentary "On Levites", wrote that Christians should not only refrain from celebrating their birthdays but should look on them with disgust, as a pagan custom. A saint's day was typically celebrated on the anniversary of their martyrdom or death, considered the occasion of, or preparation for, their entrance into Heaven or the New Jerusalem. Ordinary folk in the Middle Ages celebrated their saint's day (the saint they were named after), but the nobility celebrated the anniversary of their birth.[citation needed] "The Squire's Tale", one of Chaucer's Canterbury Tales, opens as King Cambuskan proclaims a feast to celebrate his birthday. In the modern era, the Catholic Church, the Eastern Orthodox Church and Protestantism, i.e. the three main branches of Christianity, as well as almost all Christian denominations, consider celebrating birthdays acceptable, or at most a choice of the individual. An exception is Jehovah's Witnesses, who do not celebrate birthdays for various reasons: in their interpretation, the feast has pagan origins, was not celebrated by early Christians, is negatively portrayed in the Holy Scriptures, and has customs linked to superstition and magic.
In some historically Roman Catholic and Eastern Orthodox countries,[a] it is common to have a 'name day', otherwise known as a 'saint's day'. It is celebrated in much the same way as a birthday, but is held on the official day of a saint with the same Christian name as the person celebrating; the difference is that one may look up a person's name day in a calendar, or easily remember common name days (for example, John or Mary). In pious traditions, the two were often made to coincide by giving a newborn the name of a saint celebrated on its day of confirmation, or more rarely on its birthday. Some are given the name of the religious feast of their christening day or birthday, for example Noel or Pascal (French for Christmas and "of Easter"); as another example, Togliatti was given Palmiro as his first name because he was born on Palm Sunday. Birthday celebration is not part of Islamic tradition, and because of this the majority of Muslims refrain from it. Others do not object, as long as it is not accompanied by behaviour contrary to Islamic tradition. A good portion of Muslims (and Arab Christians) who have emigrated to the United States and Europe celebrate birthdays as customary, especially for children, while others abstain. Hindus celebrate the birth anniversary each year on the day of the lunar or solar month of birth (Sun Signs Nirayana System – Sourava Mana Masa) that has the same asterism (star/nakshatra) as the date of birth; the next year of age is reckoned whenever the Janma Nakshatra of the same month passes. Hindus regard death as more auspicious than birth, since the person is liberated from the bondage of material society. Also, traditionally, rituals and prayers for the departed are observed on the 5th and 11th days, with many relatives gathering.
Historical and cultural perspectives

According to Herodotus (5th century BC), of all the days in the year, the one which the Persians celebrate most is their birthday. It was customary to have the board furnished on that day with an ampler supply than common: the richer people would serve an ox, a horse, a camel, or a donkey (Greek: ὄνον) baked whole, while the poorer classes used the smaller kinds of cattle instead. On his birthday, the king anointed his head and presented gifts to the Persians. According to the law of the Royal Supper, on that day "no one should be refused a request". The rule for drinking was "no restrictions". In ancient Rome, a birthday (dies natalis) was originally an act of religious cultivation (cultus). A dies natalis was celebrated annually for a temple on the day of its founding, and the term is still sometimes used for the anniversary of an institution such as a university. The temple founding day might become the "birthday" of the deity housed there: March 1, for example, was celebrated as the birthday of the god Mars. Each human likewise had a natal divinity, the guardian spirit called the Genius (or sometimes, for a woman, the Juno), who was owed religious devotion on the day of birth, usually at the household shrine (lararium). The decoration of a lararium often shows the Genius in the role of the person carrying out the rites. A person marked their birthday with ritual acts that might include lighting an altar, saying prayers, making vows (vota), anointing and wreathing a statue of the Genius, or sacrificing to a patron deity. Incense, cakes, and wine were common offerings. Celebrating someone else's birthday was a way to show affection, friendship, or respect. In exile, the poet Ovid, though alone, celebrated not only his own birthday rite but that of his far-distant wife. Birthday parties affirmed social as well as sacred ties.
One of the Vindolanda tablets is an invitation to a birthday party from the wife of one Roman officer to the wife of another. Books were a popular birthday gift, sometimes handcrafted as a luxury edition or composed especially for the person honored. Birthday poems are a minor but distinctive genre of Latin literature. The banquets, libations, and offerings or gifts that were a regular part of most Roman religious observances thus became part of birthday celebrations for individuals. A highly esteemed person would continue to be celebrated on their birthday after death, in addition to the several holidays on the Roman calendar for commemorating the dead collectively. Birthday commemoration was considered so important that money was often bequeathed to a social organization to fund an annual banquet in the deceased's honor. The observance of a patron's birthday or the honoring of a political figure's Genius was one of the religious foundations for imperial cult or so-called "emperor worship." The Chinese word for "year(s) old" (t 歲, s 岁, suì) is entirely different from the usual word for "year(s)" (年, nián), reflecting the former importance of Chinese astrology and the belief that one's fate was bound to the stars imagined to be in opposition to the planet Jupiter at the time of one's birth. The importance of this duodecennial orbital cycle only survives in popular culture as the 12 animals of the Chinese zodiac, which change each Chinese New Year and may be used as a theme for some gifts or decorations. Because of the importance attached to the influence of these stars in ancient China and throughout the Sinosphere, East Asian age reckoning previously began with one at birth and then added years at each Chinese New Year, so that it formed a record of the suì one had lived through rather than of the exact amount of time from one's birth. 
This method—which can differ by as much as two years of age from other systems—is increasingly uncommon and is not used for official purposes in the PRC or on Taiwan, although the word suì is still used for describing age. Traditionally, Chinese birthdays—when celebrated—were reckoned using the lunisolar calendar, which varies from the Gregorian calendar by as much as a month forward or backward depending on the year. Celebrating the lunisolar birthday remains common on Taiwan while growing increasingly uncommon on the mainland. Birthday traditions reflected the culture's deep-seated focus on longevity and wordplay. From the homophony in some dialects between 酒 ("rice wine") and 久 (meaning "long" in the sense of time passing), osmanthus and other rice wines are traditional gifts for birthdays in China. Longevity noodles are another traditional food consumed on the day, although western-style birthday cakes are increasingly common among urban Chinese. Hongbaos—red envelopes stuffed with money, now especially the red 100 RMB notes—are the usual gift from relatives and close family friends for most children. Gifts for adults on their birthdays are much less common, although the birthday for each decade is a larger occasion that might prompt a large dinner and celebration. The Japanese reckoned their birthdays by the Chinese system until the Meiji Reforms. Celebrations remained uncommon or muted until after the American occupation that followed World War II.[citation needed] Children's birthday parties are the most important, typically celebrated with a cake, candles, and singing. Adults often just celebrate with their partner. In North Korea, the Day of the Sun, Kim Il Sung's birthday, is the most important public holiday of the country, and Kim Jong Il's birthday is celebrated as the Day of the Shining Star. 
North Koreans are not permitted to celebrate birthdays on July 8 and December 17, because these were the dates of the deaths of Kim Il Sung and Kim Jong Il, respectively. More than 100,000 North Koreans celebrate displaced birthdays on July 9 and December 18 instead to avoid these dates. A person born on July 8 before 1994 may change their birthday, with official recognition. South Korea was one of the last countries to use a form of East Asian age reckoning for many official purposes. Prior to June 2023, three systems were used together: "Korean ages", which start at 1 at birth and increase every January 1st with the Gregorian New Year; "year ages", which start at 0 at birth and otherwise increase the same way; and "actual ages", which start at 0 at birth and increase on each birthday. First birthdays were heavily celebrated, despite usually having little to do with the child's age. In June 2023, all Korean ages were set back at least one year, and official ages are henceforth reckoned only by birthdays. In Ghana, children wake up on their birthday to a special treat called oto, a patty made from mashed sweet potato and eggs fried in palm oil. Later they have a birthday party where they usually eat stew and rice and a dish known as kelewele, which is fried plantain chunks.

Distribution through the year

Birthdays are fairly evenly distributed throughout the year, with some seasonal effects. In the United States, there tend to be more births in September and October. This may be because there is a holiday season nine months before (the human gestation period is about nine months), or because the longest nights of the year in the Northern Hemisphere also occur nine months before. However, the holidays affect birth rates more than the winter does: New Zealand, a Southern Hemisphere country, has the same September and October peak with no corresponding peak in March and April.
The least common birthdays tend to fall around public holidays, such as Christmas, New Year's Day and fixed-date holidays such as Independence Day in the US, which falls on July 4. Between 1973 and 1999, September 16 was the most common birthday in the United States, and December 25 was the least common birthday (other than February 29 because of leap years). In 2011, October 5 and 6 were reported as the most frequently occurring birthdays. New Zealand's most common birthday is September 29, and the least common birthday is December 25. The ten most common birthdays all fall within a thirteen-day period, between September 22 and October 4. The ten least common birthdays (other than February 29) are December 24–27, January 1–2, February 6, March 22, April 1, and April 25. This is based on all live births registered in New Zealand between 1980 and 2017. Positive and negative associations with culturally significant dates may influence birth rates. One study shows a 5.3% decrease in spontaneous births and a 16.9% decrease in Caesarean births on Halloween, compared to dates occurring within one week before and one week after the October holiday. In contrast, on Valentine's Day, there is a 3.6% increase in spontaneous births and a 12.1% increase in Caesarean births. In Sweden, 9.3% of the population is born in March and 7.3% in November, when a uniform distribution would give 8.3%. In the Gregorian calendar (a common solar calendar), February in a leap year has 29 days instead of the usual 28, so the year lasts 366 days instead of the usual 365. A person born on February 29 may be called a "leapling" or a "leaper". In common years, they usually celebrate their birthdays on February 28. In some situations, March 1 is used as the birthday in a non-leap year, since it is the day following February 28. Technically, a leapling will have fewer birthday anniversaries than their age in years.
This phenomenon is exploited when a person claims to be only a quarter of their actual age, by counting only their leap-year birthday anniversaries. In Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic the pirate apprentice discovers that he is bound to serve the pirates until his 21st birthday rather than until his 21st year. For legal purposes, a person's birthday depends on how local laws count time intervals. An individual's Beddian birthday, named in tribute to firefighter Bobby Beddia, occurs during the year in which their age matches the last two digits of the year they were born. Some studies show people are more likely to die on their birthdays, with proposed explanations including excessive drinking, suicide, cardiovascular events due to high stress or happiness, efforts to postpone death until after major social events, and death-certificate paperwork errors.
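The leap-year arithmetic described above is easy to make concrete. The following is an illustrative Python sketch (the function names are invented for this example, not taken from any source): it counts how many true February 29 anniversaries a leapling has had, and computes the Beddian year as the birth year plus its last two digits.

```python
def is_leap(year: int) -> bool:
    """Gregorian leap-year rule: divisible by 4, except centuries
    not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def leapling_anniversaries(birth_year: int, current_year: int) -> int:
    """Number of actual February 29 anniversaries a leapling born in
    birth_year has had by the end of current_year."""
    return sum(1 for y in range(birth_year + 1, current_year + 1) if is_leap(y))

def beddian_year(birth_year: int) -> int:
    """Year during which a person's age matches the last two digits
    of their birth year (the 'Beddian birthday')."""
    return birth_year + birth_year % 100

# A leapling born on February 29, 1996 has had only 7 true anniversaries
# by 2025 (2000, 2004, 2008, 2012, 2016, 2020, 2024), despite turning 29.
print(leapling_anniversaries(1996, 2025))  # 7
print(beddian_year(1953))  # 2006: at age 53, 1953 + 53
```

Note how the century rule matters: a leapling born in 1896 had no anniversary until 1904, because 1900 was not a leap year.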
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-29]
xAI (company)

X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media and technology; it is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025.

History

xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment"; by May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI.
On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that they had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus. On November 26, 2025, Elon Musk announced his plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. 
The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity (150 megawatts of power at peak), and while the agreement with the city was being worked out, the company deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of air-polluting gases, and that xAI had been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion underlies xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that made xAI a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion.
On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructure reorganised xAI into four primary development teams: one for the Grok app and others for its other features, such as Grok Imagine; Grokipedia, X and API features fall under more minor teams.

Products

According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers; it is the first Grok model with image-generation capabilities. On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image-editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4.
A high-performance version of the model, called Grok Heavy, was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a "great" AI-generated game before the end of 2026.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Desert]
Desert

A desert is a landscape where little precipitation occurs and where, consequently, living conditions create unique biomes and ecosystems. The lack of vegetation exposes the unprotected surface of the ground to denudation. About one-third of the land surface of the Earth is arid or semi-arid. This includes much of the polar regions, where little precipitation occurs and which are sometimes called polar deserts or "cold deserts". Deserts can be classified by the amount of precipitation that falls, by the temperature that prevails, by the causes of desertification, or by their geographical location. Deserts are formed by weathering processes, as large variations in temperature between day and night strain the rocks, which consequently break into pieces. Although rain seldom occurs in deserts, there are occasional downpours that can result in flash floods. Rain falling on hot rocks can cause them to shatter, and the resulting fragments and rubble strewn over the desert floor are further eroded by the wind, which picks up particles of sand and dust that can remain airborne for extended periods, sometimes causing the formation of sandstorms or dust storms. Wind-blown sand grains striking any solid object in their path can abrade the surface. Rocks are smoothed down, and the wind sorts sand into uniform deposits; the grains end up as level sheets of sand or are piled high in billowing dunes. Other deserts are flat, stony plains where all the fine material has been blown away and the surface consists of a mosaic of smooth stones, often forming desert pavements, on which little further erosion occurs. Other desert features include rock outcrops, exposed bedrock, and clays once deposited by flowing water. Temporary lakes may form, and salt pans may be left when their waters evaporate. There may be underground water sources in the form of springs and seepages from aquifers; where these are found, oases can occur.
Plants and animals living in the desert need special adaptations to survive in the harsh environment. Plants tend to be tough and wiry with small or no leaves, water-resistant cuticles, and often spines to deter herbivory. Some annual plants germinate, bloom, and die within a few weeks after rainfall, while other long-lived plants survive for years and have deep root systems that are able to tap underground moisture. Animals need to keep cool and find enough food and water to survive. Many are nocturnal and stay in the shade or underground during the day's heat. They tend to be efficient at conserving water, extracting most of their needs from their food and concentrating their urine. Some animals remain in a state of dormancy for long periods, ready to become active again during the rare rainfall. They then reproduce rapidly while conditions are favorable before returning to dormancy. People have struggled to live in deserts and the surrounding semi-arid lands for millennia. Nomads have moved their flocks and herds to wherever grazing is available, and oases have provided opportunities for a more settled way of life. The cultivation of semi-arid regions encourages erosion of soil and is one of the causes of increased desertification. Desert farming is possible with the aid of irrigation, and the Imperial Valley in California provides an example of how previously barren land can be made productive by the import of water from an outside source. Many trade routes have been forged across deserts, especially across the Sahara, and traditionally were used by caravans of camels carrying salt, gold, ivory and other goods. Large numbers of slaves were also taken northwards across the Sahara. Some mineral extraction also takes place in deserts, and the uninterrupted sunlight gives potential for capturing large quantities of solar energy. 
Etymology English desert and its Romance cognates (including Italian and Portuguese deserto, French désert and Spanish desierto) all come from the ecclesiastical Latin dēsertum (originally "an abandoned place"), a participle of dēserere, "to abandon". The correlation between aridity and sparse population is complex and dynamic, varying by culture, era, and technology; thus the use of the word desert can cause confusion. In English before the 20th century, desert was often used in the sense of "unpopulated area", without specific reference to aridity; but today the word is most often used in its climate-science sense (an area of low precipitation). Phrases such as "desert island" and "Great American Desert", or Shakespeare's "deserts of Bohemia" (The Winter's Tale), did not in previous centuries necessarily imply sand or aridity; their focus was the sparse population. Major deserts Deserts occupy about one third of Earth's land surface. Polar deserts (also known as "cold deserts") have similar features, except the main form of precipitation is snow rather than rain. Antarctica is the world's largest cold desert (composed of about 98% thick continental ice sheet and 2% barren rock). Some of this barren rock is found in the so-called Dry Valleys of Antarctica, which almost never receive snow and can contain ice-encrusted saline lakes; the strong katabatic winds there evaporate even ice, suggesting evaporation far greater than the rare snowfall. Deserts, both hot and cold, play a part in moderating Earth's temperature, because they reflect more of the incoming light, their albedo being higher than that of forests or the sea. 
Defining characteristics A desert is a region of land that is very dry because it receives low amounts of precipitation (usually in the form of rain, but it may be snow, mist or fog), often has little plant cover, and in which streams dry up unless they are supplied by water from outside the area. Deserts generally receive less than 250 mm (10 in) of precipitation each year. The potential evapotranspiration may be large but (in the absence of available water) the actual evapotranspiration may be close to zero. Semi-deserts are regions which receive between 250 and 500 mm (10 and 20 in); when clad in grass, these are known as steppes. Most deserts on Earth, such as the Sahara Desert, the Great Australian Desert and the Great Basin Desert, occur at low altitudes. One of the driest places on Earth is the Atacama Desert. It is virtually devoid of life because it is blocked from receiving precipitation by the Andes mountains to the east and the Chilean Coast Range to the west. The cold Humboldt Current and the anticyclone of the Pacific are essential to maintaining the dry climate of the Atacama. The average precipitation in the Chilean region of Antofagasta is just 1 mm (0.039 in) per year. Some weather stations in the Atacama have never received rain. Evidence suggests that the Atacama may not have had any significant rainfall from 1570 to 1971. It is so arid that mountains that reach as high as 6,885 m (22,589 ft) are completely free of glaciers and, in the southern part from 25°S to 27°S, may have been glacier-free throughout the Quaternary, though permafrost extends down to an altitude of 4,400 m (14,400 ft) and is continuous above 5,600 m (18,400 ft). Nevertheless, there is some plant life in the Atacama, in the form of specialist plants that obtain moisture from dew and the fogs that blow in from the Pacific. When rain falls in deserts, as it occasionally does, it is often with great violence. 
The desert surface is evidence of this with dry stream channels known as arroyos or wadis meandering across its surface. These can experience flash floods, becoming raging torrents with surprising rapidity after a storm that may be many kilometers away. Most deserts are in basins with no drainage to the sea but some are crossed by exotic rivers sourced in mountain ranges or other high rainfall areas beyond their borders. The River Nile, the Colorado River and the Yellow River do this, losing much of their water through evaporation as they pass through the desert and raising groundwater levels nearby. There may also be underground sources of water in deserts in the form of springs, aquifers, underground rivers or lakes. Where these lie close to the surface, wells can be dug and oases may form where plant and animal life can flourish. The Nubian Sandstone Aquifer System under the Sahara Desert is the largest known accumulation of fossil water. The Great Man-Made River is a scheme launched by Libya's Muammar Gaddafi to tap this aquifer and supply water to coastal cities. Kharga Oasis in Egypt is 150 km (93 mi) long and is the largest oasis in the Libyan Desert. A lake occupied this depression in ancient times and thick deposits of sandy-clay resulted. Wells are dug to extract water from the porous sandstone that lies underneath. Seepages may occur in the walls of canyons and pools may survive in deep shade near the dried up watercourse below. Lakes may form in basins where there is sufficient precipitation or meltwater from glaciers above. They are usually shallow and saline, and wind blowing over their surface can cause stress, moving the water over nearby low-lying areas. When the lakes dry up, they leave a crust or hardpan behind. This area of deposited clay, silt or sand is known as a playa. 
The deserts of North America have more than one hundred playas, many of them relics of Lake Bonneville which covered parts of Utah, Nevada and Idaho during the last ice age when the climate was colder and wetter. These include the Great Salt Lake, Utah Lake, Sevier Lake and many dry lake beds. The smooth flat surfaces of playas have been used for attempted vehicle speed records at Black Rock Desert and Bonneville Speedway and the United States Air Force uses Rogers Dry Lake in the Mojave Desert as runways for aircraft and the Space Shuttle. Deserts have been defined and classified in a number of ways, generally combining total precipitation, number of days on which this falls, temperature, and humidity, and sometimes additional factors. For example, Phoenix, Arizona, receives less than 250 mm (9.8 in) of precipitation per year, and is immediately recognized as being located in a desert because of its aridity-adapted plants. The North Slope of Alaska's Brooks Range also receives less than 250 mm (9.8 in) of precipitation per year and is often classified as a cold desert. Other regions of the world have cold deserts, including areas of the Himalayas and other high-altitude regions. Polar deserts cover much of the ice-free areas of the Arctic and Antarctic. A non-technical definition is that deserts are those parts of Earth's surface that have insufficient vegetation cover to support a human population. Potential evapotranspiration supplements the measurement of precipitation in providing a scientific measurement-based definition of a desert. The water budget of an area can be calculated using the formula P − PE ± S, wherein P is precipitation, PE is the potential evapotranspiration rate and S is the amount of surface storage of water. Evapotranspiration is the combination of water loss through atmospheric evaporation and through the life processes of plants. 
Potential evapotranspiration, then, is the amount of water that could evaporate in any given region. As an example, Tucson, Arizona receives about 300 mm (12 in) of rain per year; however, about 2,500 mm (98 in) of water could evaporate over the course of a year. In other words, about eight times more water could evaporate from the region than actually falls as rain. Rates of evapotranspiration in cold regions such as Alaska are much lower because of the lack of heat to aid in the evaporation process. Deserts are sometimes classified as "hot" or "cold", "semiarid" or "coastal". The characteristics of hot deserts include high temperatures in summer; greater evaporation than precipitation, usually exacerbated by high temperatures, strong winds and lack of cloud cover; considerable variation in the occurrence of precipitation, its intensity and distribution; and low humidity. Winter temperatures vary considerably between different deserts and are often related to the location of the desert on the continental landmass and the latitude. Daily variations in temperature can be as great as 22 °C (40 °F) or more, with heat loss by radiation at night being increased by the clear skies. Cold deserts, sometimes known as temperate deserts, occur at higher latitudes than hot deserts, and the aridity is caused by the dryness of the air. Some cold deserts are far from the ocean and others are separated by mountain ranges from the sea, and in both cases, there is insufficient moisture in the air to cause much precipitation. The largest of these deserts are found in Central Asia. Others occur on the eastern side of the Rocky Mountains, the eastern side of the southern Andes and in southern Australia. Polar deserts are a particular class of cold desert. 
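The water-budget arithmetic described above can be sketched in a few lines of code. This is a minimal illustration, not a standard API: the function name is ours, and the inputs are the Tucson figures quoted in the text.

```python
def water_budget(p_mm, pe_mm, storage_mm=0):
    """Water budget P - PE +/- S, with all quantities in mm per year.

    A strongly negative value indicates a moisture deficit of the
    kind that characterises arid regions.
    """
    return p_mm - pe_mm + storage_mm

# Figures quoted above for Tucson, Arizona: about 300 mm of rain
# falls per year, while about 2,500 mm could potentially evaporate.
deficit = water_budget(300, 2500)  # a large annual moisture deficit
ratio = 2500 / 300                 # potential evaporation exceeds rainfall roughly eightfold
```

The ratio of potential evapotranspiration to precipitation, rather than precipitation alone, is what marks Tucson as unambiguously arid despite its 300 mm of annual rain.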
The air is very cold and carries little moisture so little precipitation occurs and what does fall, usually snow, is carried along in the often strong wind and may form blizzards, drifts and dunes similar to those caused by dust and sand in other desert regions. In Antarctica, for example, the annual precipitation is about 50 mm (2 in) on the central plateau and some ten times that amount on some major peninsulas. Based on precipitation alone, hyperarid deserts receive less than 25 mm (1 in) of rainfall a year; they have no annual seasonal cycle of precipitation and experience twelve-month periods with no rainfall at all. Arid deserts receive between 25 and 200 mm (1 and 8 in) in a year and semiarid deserts between 200 and 500 mm (8 and 20 in). However, such factors as the temperature, humidity, rate of evaporation and evapotranspiration, and the moisture storage capacity of the ground have a marked effect on the degree of aridity and the plant and animal life that can be sustained. Rain falling in the cold season may be more effective at promoting plant growth, and defining the boundaries of deserts and the semiarid regions that surround them on the grounds of precipitation alone is problematic. A semi-arid desert or a steppe is a version of the arid desert with more rainfall, more vegetation and higher humidity. These regions feature a semi-arid climate and are less extreme than regular deserts. As in arid deserts, temperatures can vary greatly in semi-deserts. They share some characteristics of a true desert and are usually located at the edge of deserts and continental dry areas. They usually receive precipitation from 250 to 500 mm (9.8 to 19.7 in) but this can vary due to evapotranspiration and soil nutrition. 
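The precipitation-based classes above translate directly into a simple lookup. This sketch uses only the thresholds quoted in the text (hyperarid < 25 mm, arid 25–200 mm, semiarid 200–500 mm); as the text stresses, real classifications also weigh temperature, evaporation and soil moisture storage, so treat the boundaries as approximate.

```python
def aridity_class(annual_precip_mm):
    """Rough aridity class from annual precipitation alone (mm/year).

    Thresholds follow the figures quoted above; boundary handling
    (e.g. exactly 25 mm) is our own choice, since the text does not
    specify it.
    """
    if annual_precip_mm < 25:
        return "hyperarid"
    if annual_precip_mm < 200:
        return "arid"
    if annual_precip_mm < 500:
        return "semiarid"
    return "not classified as desert or semi-desert"
```

For instance, the 50 mm annual precipitation cited for the central Antarctic plateau falls in the "arid" band on this precipitation-only scale, even though Antarctica is conventionally treated as a polar desert.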
Semi-deserts can be found in the high elevations of the Tabernas Desert (and some parts of the Spanish Plateau), the Sahel, the Eurasian Steppe, most of Central Asia, the western US, most of northern Mexico, portions of South America (especially in Argentina) and the Australian Outback. They usually feature BSh (hot steppe) or BSk (temperate steppe) climates in the Köppen climate classification. Coastal deserts are mostly found on the western edges of continental land masses in regions where cold currents approach the land or cold water upwellings rise from the ocean depths. The cool winds crossing this water pick up little moisture and the coastal regions have low temperatures and very low rainfall, the main precipitation being in the form of fog and dew. The range of temperatures on a daily and annual scale is relatively low, being 11 °C (20 °F) and 5 °C (9 °F) respectively in the Atacama Desert. Deserts of this type are often long and narrow and bounded to the east by mountain ranges. They occur in Namibia, Chile, southern California and Baja California. Other coastal deserts influenced by cold currents are found in Western Australia, the Arabian Peninsula and Horn of Africa, and the western fringes of the Sahara. In 1961, Peveril Meigs divided desert regions on Earth into three categories according to the amount of precipitation they received. In this now widely accepted system, extremely arid lands have at least twelve consecutive months without precipitation, arid lands have less than 250 mm (9.8 in) of annual precipitation, and semiarid lands have a mean annual precipitation of between 250 and 500 mm (9.8 and 19.7 in). Both extremely arid and arid lands are considered to be deserts while semiarid lands are generally referred to as steppes when they are grasslands. Deserts are also classified, according to their geographical location and dominant weather pattern, as trade wind, mid-latitude, rain shadow, coastal, monsoon, or polar deserts. 
Trade wind deserts occur either side of the horse latitudes at 30° to 35° North and South. These belts are associated with the subtropical anticyclone and the large-scale descent of dry air. The Sahara Desert is of this type. Mid-latitude deserts occur between 30° and 50° North and South. They are mostly in areas remote from the sea where most of the moisture has already precipitated from the prevailing winds. They include the Tengger and Sonoran Deserts. Monsoon deserts are similar. They occur in regions where large temperature differences occur between sea and land. Moist warm air rises over the land, deposits its water content and circulates back to sea. Further inland, areas receive very little precipitation. The Thar Desert near the India/Pakistan border is of this type. In some parts of the world, deserts are created by a rain shadow effect. Orographic lift occurs as air masses rise to pass over high ground. In the process they cool and lose much of their moisture by precipitation on the windward slope of the mountain range. When they descend on the leeward side, they warm and their capacity to hold moisture increases so an area with relatively little precipitation occurs. The Taklamakan Desert is an example, lying in the rain shadow of the Himalayas and receiving less than 38 mm (1.5 in) precipitation annually. Other areas are arid by virtue of being a very long way from the nearest available sources of moisture. Montane deserts are arid places with a very high altitude; the most prominent example is found north of the Himalayas, in the Kunlun Mountains and the Tibetan Plateau. Many locations within this category have elevations exceeding 3,000 m (9,800 ft) and the thermal regime can be hemiboreal. These places owe their profound aridity (the average annual precipitation is often less than 40 mm or 1.5 in) to being very far from the nearest available sources of moisture and are often in the lee of mountain ranges. 
Montane deserts are normally cold, or may be scorchingly hot by day and very cold by night, as is true of the northeastern slopes of Mount Kilimanjaro. Polar deserts such as the McMurdo Dry Valleys remain ice-free because of the dry katabatic winds that flow downhill from the surrounding mountains. Former desert areas presently in non-arid environments, such as the Sandhills in Nebraska, are known as paleodeserts. In the Köppen climate classification system, deserts are classed as BWh (hot desert) or BWk (temperate desert). In the Thornthwaite climate classification system, deserts would be classified as arid megathermal climates. Polar deserts are a type of cold desert. They do not lack water, having a persistent cover of snow and ice, but this cover persists only because of marginal evaporation rates and low precipitation. By contrast, the McMurdo dry valleys of Antarctica lack water in any form (rain, ice or snow), much like a non-polar desert, and even have such desert features as hypersaline lakes and intermittent streams (frozen at their surfaces) that match hot or cold deserts in their extreme aridity and lack of precipitation. Extreme winds, not seasonal heat, desiccate these nearly lifeless terrains. The concept of a "biological desert" redefines the desert without the characteristic of aridity: such a region lacks not water but life. Examples include so-called "ocean deserts", mostly found at the centers of gyres, and hypoxic or anoxic waters such as dead zones. Morphology Deserts usually have a large diurnal and seasonal temperature range, with high daytime temperatures falling sharply at night. The diurnal range may be as much as 20 to 30 °C (36 to 54 °F) and the rock surface experiences even greater temperature differentials. During the day the sky is usually clear and most of the sun's radiation reaches the ground, but as soon as the sun sets, the desert cools quickly by radiating heat into space. 
In hot deserts, the temperature during daytime can exceed 45 °C (113 °F) in summer and plunge below freezing point at night during winter. Such large temperature variations have a destructive effect on the exposed rocky surfaces. The repeated fluctuations put a strain on exposed rock and the flanks of mountains crack and shatter. Fragmented strata slide down into the valleys where they continue to break into pieces due to the relentless sun by day and chill by night. Successive strata are exposed to further weathering. The relief of the internal pressure that has built up in rocks that have been underground for aeons can cause them to shatter. Exfoliation also occurs when the outer surfaces of rocks split off in flat flakes. This is believed to be caused by the stresses put on the rock by repeated thermal expansions and contractions, which induce fracturing parallel to the original surface. Chemical weathering processes probably play a more important role in deserts than was previously thought. The necessary moisture may be present in the form of dew or mist. Ground water may be drawn to the surface by evaporation and the formation of salt crystals may dislodge rock particles as sand or disintegrate rocks by exfoliation. Shallow caves are sometimes formed at the base of cliffs by this means. As the desert mountains decay, large areas of shattered rock and rubble occur. The process continues and the end products are either dust or sand. Dust is formed from solidified clay or volcanic deposits whereas sand results from the fragmentation of harder granites, limestone and sandstone. There is a certain critical size (about 0.5 mm) below which further temperature-induced weathering of rocks does not occur and this provides a minimum size for sand grains. As the mountains are eroded, more and more sand is created. At high wind speeds, sand grains are picked up off the surface and blown along, a process known as saltation. 
The whirling airborne grains act as a sand blasting mechanism which grinds away solid objects in its path as the kinetic energy of the wind is transferred to the ground. The sand eventually ends up deposited in level areas known as sand-fields or sand-seas, or piled up in dunes. Many people think of deserts as consisting of extensive areas of billowing sand dunes because that is the way they are often depicted on TV and in films, but deserts do not always look like this. Across the world, around 20% of desert is sand, varying from only 2% in North America to 30% in Australia and over 45% in Central Asia. Where sand does occur, it is usually in large quantities in the form of sand sheets or extensive areas of dunes. A sand sheet is a near-level, firm expanse of partially consolidated particles in a layer that varies from a few centimeters to a few meters thick. The structure of the sheet consists of thin horizontal layers of coarse silt and very fine to medium grain sand, separated by layers of coarse sand and pea-gravel which are a single grain thick. These larger particles anchor the other particles in place and may also be packed together on the surface so as to form a miniature desert pavement. Small ripples form on the sand sheet when the wind exceeds 24 km/h (15 mph). They form perpendicular to the wind direction and gradually move across the surface as the wind continues to blow. The distance between their crests corresponds to the average length of jumps made by particles during saltation. The ripples are ephemeral and a change in wind direction causes them to reorganise. Sand dunes are accumulations of windblown sand piled up in mounds or ridges. They form downwind of copious sources of dry, loose sand and occur when topographic and climatic conditions cause airborne particles to settle. As the wind blows, saltation and creep take place on the windward side of the dune and individual grains of sand move uphill. 
When they reach the crest, they cascade down the far side. The upwind slope typically has a gradient of 10° to 20° while the lee slope is around 32°, the angle at which loose dry sand will slip. As this wind-induced movement of sand grains takes place, the dune moves slowly across the surface of the ground. Dunes are sometimes solitary, but they are more often grouped together in dune fields. When these are extensive, they are known as sand seas or ergs. The shape of the dune depends on the characteristics of the prevailing wind. Barchan dunes are produced by strong winds blowing across a level surface and are crescent-shaped with the concave side away from the wind. When there are two directions from which winds regularly blow, a series of long, linear dunes known as seif dunes may form. These also occur parallel to a strong wind that blows in one general direction. Transverse dunes run at a right angle to the prevailing wind direction. Star dunes are formed by variable winds, and have several ridges and slip faces radiating from a central point. They tend to grow vertically; they can reach a height of 500 m (1,600 ft), making them the tallest type of dune. Rounded mounds of sand without a slip face are the rare dome dunes, found on the upwind edges of sand seas. In deserts where large amounts of limestone mountains surround a closed basin, such as at White Sands National Park in south-central New Mexico, occasional storm runoff transports dissolved limestone and gypsum into a low-lying pan within the basin where the water evaporates, depositing the gypsum and forming crystals known as selenite. The crystals left behind by this process are eroded by the wind and deposited as vast white dune fields that resemble snow-covered landscapes. These types of dune are rare, and only form in closed arid basins that retain the highly soluble gypsum that would otherwise be washed into the sea. 
A large part of the surface area of the world's deserts consists of flat, stone-covered plains dominated by wind erosion. In "eolian deflation", the wind continually removes fine-grained material, which becomes wind-blown sand. This exposes coarser-grained material, mainly pebbles with some larger stones or cobbles, leaving a desert pavement, an area of land overlaid by closely packed smooth stones forming a tessellated mosaic. Different theories exist as to how exactly the pavement is formed. It may be that after the sand and dust is blown away by the wind the stones jiggle themselves into place; alternatively, stones previously below ground may in some way work themselves to the surface. Very little further erosion takes place after the formation of a pavement, and the ground becomes stable. Evaporation brings moisture to the surface by capillary action and calcium salts may be precipitated, binding particles together to form a desert conglomerate. In time, bacteria that live on the surface of the stones accumulate a film of minerals and clay particles, forming a shiny brown coating known as desert varnish. Other non-sandy deserts consist of exposed outcrops of bedrock, dry soils or aridisols, and a variety of landforms affected by flowing water, such as alluvial fans, sinks or playas, temporary or permanent lakes, and oases. A hamada is a type of desert landscape consisting of a high rocky plateau where the sand has been removed by aeolian processes. Other landforms include plains largely covered by gravels and angular boulders, from which the finer particles have been stripped by the wind. These are called "reg" in the western Sahara, "serir" in the eastern Sahara, "gibber plains" in Australia and "saï" in central Asia. The Tassili Plateau in Algeria is a jumble of eroded sandstone outcrops, canyons, blocks, pinnacles, fissures, slabs and ravines. 
In some places the wind has carved holes or arches, and in others, it has created mushroom-like pillars narrower at the base than the top. On the Colorado Plateau, it is water that has been the prevailing eroding force. Here, rivers, such as the Colorado, have cut their way over the millennia through the high desert floor, creating canyons that are over a mile (6,000 feet or 1,800 meters) deep in places, exposing strata that are over two billion years old. Sand and dust storms are natural events that occur in arid regions where the land is not protected by a covering of vegetation. Dust storms usually start in desert margins rather than the deserts themselves where the finer materials have already been blown away. As a steady wind begins to blow, fine particles lying on the exposed ground begin to vibrate. At greater wind speeds, some particles are lifted into the air stream. When they land, they strike other particles which may be jerked into the air in their turn, starting a chain reaction. Once ejected, these particles move in one of three possible ways, depending on their size, shape and density: suspension, saltation or creep. Suspension is only possible for particles less than 0.1 mm (0.0039 in) in diameter. In a dust storm, these fine particles are lifted up and wafted aloft to heights of up to 6 km (3.7 mi). They reduce visibility and can remain in the atmosphere for days on end, conveyed by the trade winds for distances of up to 6,000 km (3,700 mi). Denser clouds of dust can be formed in stronger winds, moving across the land with a billowing leading edge. The sunlight can be obliterated and it may become as dark as night at ground level. In a study of a dust storm in China in 2001, it was estimated that 6.5 million tons of dust were involved, covering an area of 134,000,000 km2 (52,000,000 sq mi). The mean particle size was 1.44 μm. 
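The three transport modes can be sketched as a size-based rule of thumb. This is our own illustrative function, not an established formula: it uses the 0.1 mm suspension limit quoted above and assumes a saltation/creep boundary of roughly 0.5 mm, the approximate maximum saltating grain size mentioned in this article; in reality the mode also depends on wind speed, particle shape and density.

```python
def transport_mode(diameter_mm):
    """Dominant wind-transport mode for a particle of given diameter (mm).

    Boundaries are approximate: only particles below about 0.1 mm can
    stay in suspension; sand grains up to roughly 0.5 mm hop along in
    saltation; larger grains roll or shuffle along the ground (creep).
    """
    if diameter_mm < 0.1:
        return "suspension"
    if diameter_mm <= 0.5:
        return "saltation"
    return "creep"
```

On this rule, the 1.44 μm (0.00144 mm) mean particle size reported for the 2001 Chinese dust storm lies deep in the suspension regime, consistent with dust remaining airborne for days.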
A much smaller-scale, short-lived phenomenon can occur in calm conditions when hot air near the ground rises quickly through a small pocket of cooler, low-pressure air above, forming a whirling column of particles, a dust devil. Sandstorms occur with much less frequency than dust storms. They are often preceded by severe dust storms and occur when the wind velocity increases to a point where it can lift heavier particles. These grains of sand, up to about 0.5 mm (0.020 in) in diameter, are jerked into the air but soon fall back to earth, ejecting other particles in the process. Their weight prevents them from being airborne for long and most only travel a distance of a few meters (yards). The sand streams along above the surface of the ground like a fluid, often rising to heights of about 30 cm (12 in). In a really severe steady blow, 2 m (6 ft 7 in) is about as high as the sand stream can rise as the largest sand grains do not become airborne at all. They are transported by creep, being rolled along the desert floor or performing short jumps. During a sandstorm, the wind-blown sand particles become electrically charged. Such electric fields, which can reach 80 kV/m, can produce sparks and cause interference with telecommunications equipment. They are also unpleasant for humans and can cause headaches and nausea. The electric fields are caused by the collision between airborne particles and by the impacts of saltating sand grains landing on the ground. The mechanism is little understood but the particles usually have a negative charge when their diameter is under 250 μm and a positive one when they are over 500 μm. Ecology and biogeography Deserts and semi-deserts are home to ecosystems with low or very low biomass and primary productivity in arid or semi-arid climates. They are mostly found in subtropical high-pressure belts and major continental rain shadows. 
Primary productivity depends on low densities of small photoautotrophs that sustain a sparse trophic network. Plant growth is limited by rainfall, temperature extremes and desiccating winds. Deserts have strong temporal variability in the availability of resources due to the total amount of annual rainfall and the size of individual rainfall events. Resources are often ephemeral or episodic, and this triggers sporadic animal movements and 'pulse and reserve' or 'boom-bust' ecosystem dynamics. Erosion and sedimentation are high due to the sparse vegetation cover and the activities of large mammals and people. Plants and animals in deserts are mostly adapted to extreme and prolonged water deficits, but their reproductive phenology often responds to short episodes of surplus. Competitive interactions are weak. Plants face severe challenges in arid environments. Problems they need to solve include how to obtain enough water, how to avoid being eaten and how to reproduce. Photosynthesis is the key to plant growth. It can only take place during the day as energy from the sun is required, but during the day, many deserts become very hot. Opening stomata to allow in the carbon dioxide necessary for the process causes evapotranspiration, and conservation of water is a top priority for desert vegetation. Some plants have resolved this problem by adopting crassulacean acid metabolism, allowing them to open their stomata during the night to allow CO2 to enter, and close them during the day, or by using C4 carbon fixation. Many desert plants have reduced the size of their leaves or abandoned them altogether. Cacti are present in both North and South America with a post-Gondwana origin. The family is a desert specialist, and in most species, the leaves have been dispensed with and the chlorophyll displaced into the trunks, the cellular structure of which has been modified to allow them to store water. 
When rain falls, the water is rapidly absorbed by the shallow roots and retained to allow them to survive until the next downpour, which may be months or years away. The giant saguaro cacti of the Sonoran Desert form "forests", providing shade for other plants and nesting places for desert birds. The saguaro grows slowly but may live for up to two hundred years. The surface of the trunk is folded like a concertina, allowing it to expand, and a large specimen can hold eight tons of water after a good downpour. Other xerophytic plants have developed similar strategies by a process known as convergent evolution. They limit water loss by reducing the size and number of stomata, by having waxy coatings and hairy or tiny leaves. Some are deciduous, shedding their leaves in the driest season, and others curl their leaves up to reduce transpiration. Others, such as aloes, store water in succulent leaves or stems or in fleshy tubers. Desert plants maximize water uptake by having shallow roots that spread widely, or by developing long taproots that reach down to deep rock strata for ground water. The saltbush in Australia has succulent leaves and secretes salt crystals, enabling it to live in saline areas. In common with cacti, many have developed spines to ward off browsing animals. Some desert plants produce seed which lies dormant in the soil until sparked into growth by rainfall. Annuals among these grow with great rapidity and may flower and set seed within weeks, aiming to complete their development before the last vestige of water dries up. For perennial plants, reproduction is more likely to be successful if the seed germinates in a shaded position, but not so close to the parent plant as to be in competition with it. Some seed will not germinate until it has been blown about on the desert floor to scarify the seed coat. The seed of the mesquite tree, which grows in deserts in the Americas, is hard and fails to sprout even when planted carefully. 
When it has passed through the gut of a pronghorn it germinates readily, and the little pile of moist dung provides an excellent start to life well away from the parent tree. The stems and leaves of some plants lower the surface velocity of sand-carrying winds and protect the ground from erosion. Even small fungi and microscopic plant organisms found on the soil surface (so-called cryptobiotic soil) can be a vital link in preventing erosion and providing support for other living organisms. Cold deserts often have high concentrations of salt in the soil. Grasses and low shrubs are the dominant vegetation here and the ground may be covered with lichens. Most shrubs have spiny leaves and shed them in the coldest part of the year. Animals adapted to live in deserts are called xerocoles. There is no evidence that the body temperature of mammals and birds is adapted to the different climates, either of great heat or cold. In fact, with a very few exceptions, their basal metabolic rate is determined by body size, irrespective of the climate in which they live. Many desert animals (and plants) show especially clear evolutionary adaptations for water conservation or heat tolerance and so are often studied in comparative physiology, ecophysiology, and evolutionary physiology. One well-studied example is the specializations of mammalian kidneys shown by desert-inhabiting species. Many examples of convergent evolution have been identified in desert organisms, including between cacti and Euphorbia, between kangaroo rats and jerboas, and between Phrynosoma and Moloch lizards. Deserts present a very challenging environment for animals. Not only do they require food and water but they also need to keep their body temperature at a tolerable level. In many ways, birds are the best equipped of the higher animals to do this. They can move to areas of greater food availability as the desert blooms after local rainfall and can fly to faraway waterholes. 
In hot deserts, gliding birds can remove themselves from the over-heated desert floor by using thermals to soar in the cooler air at great heights. In order to conserve energy, other desert birds run rather than fly. The cream-coloured courser flits gracefully across the ground on its long legs, stopping periodically to snatch up insects. Like other desert birds, it is well-camouflaged by its colour and can merge into the landscape when stationary. The sandgrouse is an expert at this and nests on the open desert floor dozens of kilometers (miles) away from the waterhole it needs to visit daily. Some small diurnal birds are found in very restricted localities where their plumage matches the colour of the underlying surface. The desert lark takes frequent dust baths, which ensure that it matches its environment. Water and carbon dioxide are metabolic end products of the oxidation of fats, proteins, and carbohydrates. Oxidising a gram of carbohydrate produces 0.60 grams of water; a gram of protein produces 0.41 grams of water; and a gram of fat produces 1.07 grams of water, making it possible for xerocoles to live with little or no access to drinking water. The kangaroo rat, for example, makes use of this water of metabolism and conserves water both by having a low basal metabolic rate and by remaining underground during the heat of the day, reducing loss of water through its skin and respiratory system when at rest. Herbivorous mammals obtain moisture from the plants they eat. Species such as the addax antelope, dik-dik, Grant's gazelle and oryx are so efficient at doing this that they apparently never need to drink. The camel is a superb example of a mammal adapted to desert life. It minimizes its water loss by producing concentrated urine and dry dung, and is able to lose 40% of its body weight through water loss without dying of dehydration. Carnivores can obtain much of their water needs from the body fluids of their prey. 
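The arithmetic of metabolic water is simple enough to sketch. The per-gram yields below are the figures quoted above; the example diet is purely hypothetical and only illustrates the calculation:

```python
# Metabolic water yields (g of water per g oxidised), as quoted in the text.
WATER_YIELD = {"carbohydrate": 0.60, "protein": 0.41, "fat": 1.07}

def metabolic_water(grams_by_nutrient):
    """Total water (g) produced by fully oxidising the given nutrient masses."""
    return sum(WATER_YIELD[nutrient] * grams
               for nutrient, grams in grams_by_nutrient.items())

# Hypothetical daily intake for a small seed-eating rodent (illustrative only).
diet = {"carbohydrate": 3.0, "protein": 1.0, "fat": 0.5}
print(f"{metabolic_water(diet):.2f} g of metabolic water per day")
```

Note that fat yields more water per gram than either of the other nutrients, which is one reason energy-dense, oily seeds can double as a water source for animals such as the kangaroo rat.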
Many other hot desert animals are nocturnal, seeking out shade during the day or dwelling underground in burrows. At depths of more than 50 cm (20 in), these burrows remain at between 30 and 32 °C (86 and 90 °F) regardless of the external temperature. Jerboas, desert rats, kangaroo rats and other small rodents emerge from their burrows at night and so do the foxes, coyotes, jackals and snakes that prey on them. Kangaroos keep cool by increasing their respiration rate, panting, sweating and moistening the skin of their forelegs with saliva. Mammals living in cold deserts have developed greater insulation through warmer body fur and insulating layers of fat beneath the skin. The Arctic weasel has a metabolic rate that is two or three times as high as would be expected for an animal of its size. Birds have avoided the problem of losing heat through their feet by not attempting to maintain them at the same temperature as the rest of their bodies, a form of adaptive insulation. The emperor penguin has dense plumage, a downy under-layer, an air insulation layer next to the skin and various thermoregulatory strategies to maintain its body temperature in one of the harshest environments on Earth. Being ectotherms, reptiles are unable to live in cold deserts but are well-suited to hot ones. In the heat of the day in the Sahara, the temperature can rise to 50 °C (122 °F). Reptiles cannot survive at this temperature and lizards will be prostrated by heat at 45 °C (113 °F). They have few adaptations to desert life and are unable to cool themselves by sweating, so they shelter during the heat of the day. In the first part of the night, as the ground radiates the heat absorbed during the day, they emerge and search for prey. Lizards and snakes are the most numerous in arid regions, and certain snakes have developed a novel method of locomotion that enables them to move sideways and navigate high sand dunes. 
These include the horned viper of Africa and the sidewinder of North America, evolutionarily distinct but with similar behavioural patterns because of convergent evolution. Many desert reptiles are ambush predators, often burying themselves in the sand and waiting for prey to come within range. Amphibians might seem unlikely desert-dwellers because of their need to keep their skins moist and their dependence on water for reproductive purposes. In fact, the few species that are found in this habitat have made some remarkable adaptations. Most of them are fossorial, spending the hot dry months aestivating in deep burrows. While there, they shed their skins a number of times and keep the remnants around them as a waterproof cocoon to retain moisture. In the Sonoran Desert, Couch's spadefoot toad spends most of the year dormant in its burrow. Heavy rain is the trigger for emergence, and the first male to find a suitable pool calls to attract others. Eggs are laid and the tadpoles grow rapidly, as they must reach metamorphosis before the water evaporates. As the desert dries out, the adult toads rebury themselves. The juveniles stay on the surface for a while, feeding and growing, but soon dig themselves burrows. Few make it to adulthood. The water-holding frog in Australia has a similar life cycle and may aestivate for as long as five years if no rain falls. The desert rain frog of Namibia is nocturnal and survives because of the damp sea fogs that roll in from the Atlantic. Invertebrates, particularly arthropods, have successfully made their homes in the desert. Flies, beetles, ants, termites, locusts, millipedes, scorpions and spiders have hard cuticles which are impervious to water, and many of them lay their eggs underground so that their young develop away from the temperature extremes at the surface. The Saharan silver ant (Cataglyphis bombycina) uses a heat shock protein in a novel way and forages in the open during brief forays in the heat of the day. 
The long-legged darkling beetle in Namibia stands on its front legs and raises its carapace to catch the morning mist as condensate, funnelling the water into its mouth. Some arthropods make use of the ephemeral pools that form after rain and complete their life cycle in a matter of days. The desert shrimp does this, appearing "miraculously" in newly formed puddles as its dormant eggs hatch. Others, such as brine shrimps, fairy shrimps and tadpole shrimps, are cryptobiotic and can lose up to 92% of their body weight, rehydrating as soon as it rains and their temporary pools reappear. Human relations Humans have long made use of deserts as places to live, and more recently have started to exploit them for minerals and energy capture. Deserts play a significant role in human culture with an extensive literature. Deserts can only support a limited population of both humans and animals. People have been living in deserts for millennia. Many, such as the Bushmen in the Kalahari, the Aborigines in Australia and various tribes of North American Indians, were originally hunter-gatherers. They developed skills in the manufacture and use of weapons, animal tracking, finding water, foraging for edible plants and using the things they found in their natural environment to supply their everyday needs. Their self-sufficient skills and knowledge were passed down through the generations by word of mouth. Other cultures developed a nomadic way of life as herders of sheep, goats, cattle, camels, yaks, llamas or reindeer. They travelled over large areas with their herds, moving to new pastures as seasonal and erratic rainfall encouraged new plant growth. They took with them their tents made of cloth or skins draped over poles, and their diet included milk, blood and sometimes meat. The desert nomads were also traders. The Sahara is a very large expanse of land stretching from the Atlantic coast to Egypt. 
Trade routes were developed linking the Sahel in the south with the fertile Mediterranean region to the north and large numbers of camels were used to carry valuable goods across the desert interior. The Tuareg were traders and the transported goods traditionally included slaves, ivory and gold going northwards and salt going southwards. Berbers with knowledge of the region were employed to guide the caravans between the various oases and wells. Several million slaves may have been taken northwards across the Sahara between the 8th and 18th centuries. Traditional means of overland transport declined with the advent of motor vehicles, shipping and air freight, but caravans still travel along routes between Agadez and Bilma and between Timbuktu and Taoudenni carrying salt from the interior to desert-edge communities. Round the rims of deserts, where more precipitation occurred and conditions were more suitable, some groups took to cultivating crops. This may have happened when drought caused the death of herd animals, forcing herdsmen to turn to cultivation. With few inputs, they were at the mercy of the weather and may have lived at bare subsistence level. The land they cultivated reduced the area available to nomadic herders, causing disputes over land. The semi-arid fringes of the desert have fragile soils which are at risk of erosion when exposed, as happened in the American Dust Bowl in the 1930s. The grasses that held the soil in place were ploughed under, and a series of dry years caused crop failures, while enormous dust storms blew the topsoil away. Half a million Americans were forced to leave their land in this catastrophe. Similar damage is being done today to the semi-arid areas that rim deserts and about twelve million hectares of land are being turned to desert each year. Desertification is caused by such factors as drought, climatic shifts, tillage for agriculture, overgrazing and deforestation. 
Vegetation plays a major role in determining the composition of the soil. In many environments, the rate of erosion and run off increases dramatically with reduced vegetation cover. Deserts contain substantial mineral resources, sometimes over their entire surface, giving them their characteristic colors. For example, the red of many sand deserts comes from laterite minerals. Geological processes in a desert climate can concentrate minerals into valuable deposits. Leaching by ground water can extract ore minerals and redeposit them, according to the water table, in concentrated form. Similarly, evaporation tends to concentrate minerals in desert lakes, creating dry lake beds or playas rich in minerals. Evaporation can concentrate minerals as a variety of evaporite deposits, including gypsum, sodium nitrate, sodium chloride and borates. Evaporites are found in the US's Great Basin Desert, historically exploited by the "20-mule teams" pulling carts of borax from Death Valley to the nearest railway. A desert especially rich in mineral salts is the Atacama Desert, Chile, where sodium nitrate has been mined for explosives and fertilizer since around 1850. Other desert minerals are copper from Chile, Peru, and Iran, and iron and uranium in Australia. Many other metals, salts and commercially valuable types of rock such as pumice are extracted from deserts around the world. Oil and gas form on the bottom of shallow seas when micro-organisms decompose under anoxic conditions and later become covered with sediment. Many deserts were at one time the sites of shallow seas and others have had underlying hydrocarbon deposits transported to them by the movement of tectonic plates. Some major oilfields such as Ghawar are found under the sands of Saudi Arabia. Geologists believe that other oil deposits were formed by aeolian processes in ancient deserts as may be the case with some of the major American oil fields. 
Traditional desert farming systems have long been established in North Africa, irrigation being the key to success in an area where water stress is a limiting factor to growth. Techniques that can be used include drip irrigation, the use of organic residues or animal manures as fertilisers and other traditional agricultural management practices. Once fertility has been built up, further crop production preserves the soil from destruction by wind and other forms of erosion. It has been found that plant growth-promoting bacteria play a role in increasing the resistance of plants to stress conditions, and these rhizobacterial suspensions could be inoculated into the soil in the vicinity of the plants. A study of these microbes found that desert farming hampers desertification by establishing islands of fertility, allowing farmers to achieve increased yields despite the adverse environmental conditions. A field trial in the Sonoran Desert which exposed the roots of different species of tree to rhizobacteria and the nitrogen-fixing bacterium Azospirillum brasilense with the aim of restoring degraded lands was only partially successful. The Judean Desert was farmed in the 7th century BC during the Iron Age to supply food for desert forts. Native Americans in the southwestern United States became agriculturalists around 600 AD when seeds and technologies became available from Mexico. They used terracing techniques and grew gardens beside seeps, in moist areas at the foot of dunes, near streams providing flood irrigation and in areas irrigated by extensive specially built canals. The Hohokam tribe constructed over 500 miles (800 km) of large canals and maintained them for centuries, an impressive feat of engineering. They grew maize, beans, squash and peppers. A modern example of desert farming is the Imperial Valley in California, which has high temperatures and average rainfall of just 3 in (76 mm) per year. 
The economy is heavily based on agriculture and the land is irrigated through a network of canals and pipelines sourced entirely from the Colorado River via the All-American Canal. The soil is deep and fertile, being part of the river's flood plain, and what would otherwise have been desert has been transformed into one of the most productive farming regions in California. Other water from the river is piped to urban communities, but all this has been at the expense of the river, which below the extraction sites no longer has any above-ground flow during most of the year. Another problem of growing crops in this way is the build-up of salinity in the soil caused by the evaporation of river water. The greening of the desert remains an aspiration and was at one time viewed as a future means of increasing food production for the world's growing population. This prospect has proved illusory, as it disregarded the environmental damage caused elsewhere by the diversion of water for desert irrigation projects. Deserts are increasingly seen as sources of solar energy, partly due to low amounts of cloud cover. Many solar power plants have been built in the Mojave Desert, such as the Solar Energy Generating Systems and the Ivanpah Solar Power Facility. Large swaths of this desert are covered in mirrors. The potential for generating solar energy from the Sahara Desert is huge, the highest found on the globe. Professor David Faiman of Ben-Gurion University has stated that the technology now exists to supply all of the world's electricity needs from 10% of the Sahara Desert. Desertec Industrial Initiative was a consortium seeking $560 billion of investment in North African solar and wind installations over forty years to supply electricity to Europe via cables running under the Mediterranean Sea. European interest in the Sahara stems from two of its aspects: almost continual daytime sunshine and plenty of unused land. 
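Faiman's 10%-of-the-Sahara claim is easy to sanity-check at the order-of-magnitude level. All figures below are rough assumptions chosen for illustration, not values from the text:

```python
# Back-of-envelope check of the "10% of the Sahara" claim.
# Every constant here is an assumed, approximate figure.
SAHARA_AREA_KM2 = 9.2e6            # approximate area of the Sahara
INSOLATION_KWH_PER_M2_YR = 2200.0  # assumed annual insolation on desert ground
PANEL_EFFICIENCY = 0.20            # assumed photovoltaic conversion efficiency
WORLD_DEMAND_TWH_YR = 25000.0      # assumed annual world electricity demand

def sahara_output_twh(fraction):
    """Annual electricity (TWh) from covering `fraction` of the Sahara in PV."""
    area_m2 = fraction * SAHARA_AREA_KM2 * 1e6          # km^2 -> m^2
    kwh = area_m2 * INSOLATION_KWH_PER_M2_YR * PANEL_EFFICIENCY
    return kwh / 1e9                                     # kWh -> TWh

output = sahara_output_twh(0.10)
print(f"{output:,.0f} TWh/yr vs demand of {WORLD_DEMAND_TWH_YR:,.0f} TWh/yr")
```

Under these assumptions, 10% of the Sahara yields output more than an order of magnitude above world demand, so the claim remains plausible even after heavy discounts for transmission losses, panel spacing and storage.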
The Sahara receives more sunshine per acre than any part of Europe. The Sahara Desert also has hundreds of square miles of the empty space required to house fields of mirrors for solar plants. The Negev Desert in Israel and the surrounding area, including the Arava Valley, receive plenty of sunshine and are generally not arable. This has resulted in the construction of many solar plants. David Faiman has proposed that "giant" solar plants in the Negev could supply all of Israel's needs for electricity. The Arabs were probably the first organized force to conduct successful battles in the desert. By knowing back routes and the locations of oases, and by utilizing camels, Muslim Arab forces were able to overcome both Roman and Persian forces in the period 600 to 700 AD during the expansion of the Islamic caliphate. Many centuries later, both world wars saw fighting in the desert. In the First World War, the Ottoman Turks were engaged with the British regular army in a campaign that spanned the Arabian Peninsula. The Turks were defeated by the British, who had the backing of irregular Arab forces seeking to revolt against the Turks in the Hejaz, a campaign made famous in T. E. Lawrence's book Seven Pillars of Wisdom. In the Second World War, the Western Desert Campaign began in Italian Libya. Warfare in the desert offered great scope for tacticians to use the large open spaces without the distraction of casualties among civilian populations. Tanks and armoured vehicles were able to travel large distances unimpeded, and land mines were laid in large numbers. However, the size and harshness of the terrain meant that all supplies needed to be brought in from great distances. The victors in a battle would advance and their supply chain would necessarily become longer, while the defeated army could retreat, regroup and resupply. For these reasons, the front line moved back and forth through hundreds of kilometers as each side lost and regained momentum. 
Its most easterly point was at El Alamein in Egypt, where the Allies decisively defeated the Axis forces in 1942. The desert is generally thought of as a barren and empty landscape. It has been portrayed by writers, film-makers, philosophers, artists and critics as a place of extremes, a metaphor for anything from death, war or religion to the primitive past or the desolate future. There is an extensive literature on the subject of deserts. An early historical account is that of Marco Polo (c. 1254–1324), who travelled through Central Asia to China, crossing a number of deserts in his twenty-four-year trek. Some accounts give vivid descriptions of desert conditions, though often accounts of journeys across deserts are interwoven with reflection, as is the case in Charles Montagu Doughty's major work, Travels in Arabia Deserta (1888). Antoine de Saint-Exupéry described both his flying and the desert in Wind, Sand and Stars, and Gertrude Bell travelled extensively in the Arabian desert in the early part of the 20th century, becoming an expert on the subject, writing books and advising the British government on dealing with the Arabs. Another woman explorer was Freya Stark, who travelled alone in the Middle East, visiting Turkey, Arabia, Yemen, Syria, Persia and Afghanistan, writing over twenty books on her experiences. The German naturalist Uwe George spent several years living in deserts, recording his experiences and research in his 1976 book, In the Deserts of this Earth. The American poet Robert Frost expressed his bleak thoughts in his poem "Desert Places" (1933), which ends with the stanza "They cannot scare me with their empty spaces / Between stars – on stars where no human race is. / I have it in me so much nearer home / To scare myself with my own desert places." Saints associated with the desert include Anthony the Great, also known as "Anthony of the Desert". 
Pope Benedict XVI linked the metaphorical existence of "internal deserts" with physical and social deserts in his homily inaugurating his papacy: "The external deserts in the world are growing, because the internal deserts have become so vast". Deserts on other planets Mars, Venus, and Titan are among the planetary-mass objects in the Solar System on which self-sustaining deserts are present. Despite its low surface atmospheric pressure (only 1/100 of that of Earth), the patterns of atmospheric circulation on Mars have formed a sea of circumpolar sand more than 5 million km2 (1.9 million sq mi) in area, larger than most deserts on Earth. The Martian deserts consist of half-moon dunes in flat areas near the permanent polar ice caps in the north. Smaller dune fields occupy the bottoms of many of the craters in the Martian polar regions. Examination of the surface of rocks by laser beamed from the Mars Exploration Rover appears to show a surface film that resembles the desert varnish found on Earth, although it might just be surface dust. The surface of Titan, a moon of Saturn, also has a desert-like surface with dune seas. Venus has a surface air pressure approximately 90 times that of Earth's atmosphere and a surface temperature of over 730 K, more than twice Earth's mean surface temperature in absolute terms, and its surface has no bodies of liquid nor does it receive much precipitation. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joseph_Reed_(politician)] | [TOKENS: 1593]
Contents Joseph Reed (politician) Joseph Reed (August 27, 1741 – March 5, 1785) was an American lawyer, military officer, politician, and Founding Father of the United States. He served as aide-de-camp to George Washington, as adjutant general of the Continental Army and fought in several key battles during the American Revolutionary War. He is credited with designing the Pine Tree Flag used during the war. He served as a delegate to the Continental Congress from Pennsylvania and was a signatory to the Articles of Confederation. He served as the third President of Pennsylvania's Supreme Executive Council, a position analogous to the modern office of Governor, from 1778 to 1781. He was elected to Congress a second time in 1784, but did not take office due to poor health. Early life and education Reed was born in Trenton in the Province of New Jersey on August 27, 1741, to Andrew Reed and Theodosia Bowes. His grandfather, Joseph Reed, was a wealthy merchant born in Carrickfergus, County Antrim, in Ulster, who settled in West Jersey. The family moved to Philadelphia shortly after Reed's birth and, as a boy, Reed was enrolled at Philadelphia Academy. He received his bachelor's degree from the College of New Jersey (later known as Princeton University) in 1757. He studied law under Richard Stockton. In the summer of 1763, Reed sailed for England and studied law at Middle Temple in London for two years. A few years after his studies ended, in 1768, Reed was elected as a member of the American Philosophical Society. Business career Upon his return from London, he established a law practice in Trenton, New Jersey, and was appointed deputy secretary of New Jersey and clerk of the council. He worked as an assistant to Dennys de Berdt, a former agent for his father and the colonial representative for New England. He was a successful land speculator. Military career In 1775, after the Battles of Lexington and Concord, Reed was appointed lieutenant colonel in the Pennsylvania Militia. 
When his friend George Washington was appointed commander-in-chief, Reed became his aide-de-camp. Reed is credited with creating the Pine Tree Flag. On October 20, 1775, Reed wrote a letter to Colonel John Glover of the "Marblehead Men" Regiment of seamen in the Continental Army, setting out the design of the First Navy Flag, the Evergreen Tree of Liberty flag. Reed wrote: "What do you think of a Flag with a white Ground, a tree in the middle, the motto: 'Appeal to Heaven'." In June 1776, Reed became Adjutant General of the Continental Army with the rank of colonel and fought in the Battle of Long Island. In this service he became one of General Washington's closest confidants; Washington wrote letters to him frequently and rarely traveled or made any substantial military decision without first consulting Reed. Because of his knowledge of the terrain of New Jersey, Reed was instrumental in the planning of the Battle of Trenton. He fought in the Battle of Princeton and provided important intelligence back to Washington. He was involved in the second crossing of the Delaware, and fought in the Battle of Brandywine, the Battle of Germantown and the Battle of Monmouth. In December 1776, anxious to know the location of General Charles Lee's forces following the Continental Army's chaotic retreat from Manhattan, Washington opened a letter from Lee to Reed which indicated that they were both having serious doubts about Washington's decision-making and abilities. This was extremely disconcerting to Washington, as Reed was one of his most trusted officers. Washington and Reed maintained a working relationship in the army, although Reed never regained the same level of trust from Washington from that point forward. In 1782, Reed was accused of treasonous conduct during the war in an anonymous newspaper article. Reed assumed the article was written by Colonel John Cadwalader, but others believe the author was Dr. Benjamin Rush. 
A pamphlet series defending Reed was published in 1783. Political career He served on the Committee of Correspondence for Philadelphia in 1774, as president of Pennsylvania's second Provincial Congress in 1775 and as a member of the Pennsylvania Assembly in 1776. He was offered the position of Chief Justice of the Supreme Court of Pennsylvania in 1777, but declined. In 1778, Reed was one of the signers of the Articles of Confederation. On December 1, 1778, he was elected President of the Supreme Executive Council of Pennsylvania, a position analogous to the modern office of governor. Reed oversaw the gradual abolition of slavery in Pennsylvania and the award of lifelong "half-pay" to Revolutionary soldiers. Reed carried on a public feud with Benedict Arnold, who was the military commander of Philadelphia at the time, accusing him of eight instances of corruption. Arnold demanded a military trial and successfully cleared his name, although his reputation was damaged. Arnold resigned his post in Philadelphia, and the affair contributed to his later decision to commit treason against the United States. In 1778, Reed reported to Congress that Frederick Howard, 5th Earl of Carlisle, through the Carlisle Peace Commission, had attempted to bribe him to promote reconciliation of the colonies with Britain. Reed's antipathy to Pennsylvania's Loyalist residents is well attested by historical sources. While in Congress, he advocated the seizure of Loyalist properties and treason charges for those aligned with Great Britain (Reed and his family then lived in a confiscated Loyalist home). Congress regarded the Loyalist citizens in a more tolerant manner. As President of Pennsylvania, Reed oversaw numerous trials of suspected Loyalists. He also played a key role in settling the Pennsylvania Line Mutiny in January 1781. 
After leaving the office of President of the Supreme Executive Council, he served as one of the lawyers who defended Pennsylvania's claim to the Wyoming Valley in a land dispute with the state of Connecticut. He was elected to Congress a second time in 1784, but was unable to take office due to poor health. Personal life During his time studying in London, Reed became romantically attached to Esther de Berdt, the daughter of the agent for the Province of Massachusetts Bay, Dennys de Berdt. Though very fond of Reed, Dennys de Berdt was aware of Reed's intention to return to Philadelphia and initially refused consent for Esther to marry him. Reed returned to the Colonies with only a tenuous engagement to Esther, and with an understanding that he would return to settle permanently in Great Britain shortly afterwards. Following the death of his own father, Reed returned to London, only to find that Esther's father had died while Reed was at sea. Reed and Esther married in May 1770 at Saint Luke's, Cripplegate, near the City of London. Finding the de Berdt family in financial difficulties, Reed remained in London long enough to help settle his wife's family's affairs. Together with the widowed Mrs. de Berdt, Esther and Reed sailed for North America in October 1770. The Reeds would have five children: Joseph, who became a prominent lawyer; Denis de Berdt; George Washington, who became a Navy commander; Esther; and Martha. Reed owned two slaves. Death In 1784, Reed visited England in the hope of improving his health, but was not successful. He returned to Pennsylvania and died in Philadelphia on March 5, 1785, at the age of 43. Reed was initially interred in the Second Presbyterian Church cemetery in Philadelphia. Both he and his wife were reinterred at Laurel Hill Cemetery in 1868. References Citations Sources Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-68] | [TOKENS: 4733]
Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church, and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs.
The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued into the Levantine Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases, Stratum IVb having a mudbrick wall on stone foundations and rounded exterior corners.
In Stratum IVa there was a mudbrick wall with no stone foundations, with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE.
In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba Map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine.
After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, which was referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya in the Mamluk empire during the fourteenth and fifteenth centuries. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus.
In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special products ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 akçe. All of the revenue went to the waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to the Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city.
In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was later renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); of the Christians, 921 were Orthodox, 4 Roman Catholic and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000: 18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda.
In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east.
According to a 2010 report in The Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500. According to the 2019 census, the population of Lod was 77,223, of whom 53,581 people (69.4% of the city's population) were classified as "Jews and Others" and 23,642 (30.6%) as "Arab". Education According to CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 pupils and 13 high schools with 4,863 pupils.
About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co", a subsidiary of the Strauss Group, and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home.
The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007.
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Algeria] | [TOKENS: 6777]
History of the Jews in Algeria The history of Jews in Algeria goes back to Antiquity, although it is not possible to trace with any certainty the time and circumstances of the arrival of the first Jews in what is now Algeria.[a] In any case, several waves of immigration helped to increase the population. There may have been Jews in Carthage and present-day Algeria before the Roman conquest, but the development of Jewish communities is linked to the Roman presence. Jewish revolts in Judea and Cyrenaica in the 1st and 2nd centuries certainly led to the arrival of Jewish immigrants from these regions. The vast majority of scholarly sources reject the notion that there were any large-scale conversions of Berbers to Judaism. The Muslim conquest of North Africa, which was completed in Algeria in the 8th century, brought North Africa into the realm of Islamic civilization and had a lasting impact on the identity of local Jewish communities, whose status was henceforth governed by the dhimma. New immigrants later strengthened the Algerian Jewish community: Jews fled Spain during the Visigothic persecutions of the 6th and 7th centuries, and again during the persecutions linked to the Spanish Reconquista of the 14th and 15th centuries. Many Jews from the Iberian Peninsula settled in Algeria, mixing with the local Jewish population and influencing its traditions. In the 18th century, other Jews, the Granas of Livorno, were few in number but played a role as commercial intermediaries between Europe and the Ottoman Empire. Later, in the 19th century, many Jews from Tetouan arrived in Algeria, strengthening the ranks of the community. After the French colonization of Algeria in 1830, Algerian Jews, like other Algerians, faced discrimination by the colonial state. Like Muslims, they were given the status of "indigène" (indigenous) and were barred from gaining French citizenship unless particular conditions were met.
However, the dhimma was abolished, and Jews became equal to Muslims under French law. Indeed, the Muslim law that had previously governed the country put Jews at a distinct disadvantage to Muslims, especially in the legal sphere and in their treatment as inhabitants of the country. This changed in 1870, with the Crémieux Decree granting Algerian Jews French citizenship (except for Mozabite Jews), while Muslims remained under the second-class indigenous status. Algerian Jews increasingly identified with metropolitan France, and despite a period of forced return to second-class indigenous status during World War II, they opted en masse to be repatriated to France on the eve of Algerian independence, when even the formerly excluded Mozabite Jews were granted French citizenship, with a minority choosing Israel. This virtually put an end to more than 2,000 years of presence on Algerian soil. A few dozen very discreet Jews still live in Algeria. History There is evidence of Jewish settlements in Algeria since at least the Roman period (Mauretania Caesariensis). Epitaphs found in archaeological excavations attest to Jews in the first centuries CE. Berber lands were said to have welcomed Christians and Jews from the Roman Empire very early on. The destruction of the Second Temple in Jerusalem by Titus in 70 CE, and thereafter the Kitos War in 117, reinforced Jewish settlement in North Africa and the Mediterranean. Early descriptions of the Rustamid capital, Tahert, note that Jews were found there, as they would be in any other major Muslim city of North Africa. Centuries later, the letters found in the Cairo Geniza mention many Algerian Jewish families. In the 7th century, Jewish settlements in North Africa were reinforced by Jewish immigrants who came to North Africa fleeing the persecutions of the Visigothic king Sisebut and his successors. They escaped to the Maghreb, which was at the time still part of the Byzantine Empire.
It is debated whether Jews influenced the Berber population, making converts among them. In that century, Islamic armies conquered the whole Maghreb and most of the Iberian Peninsula. The Jewish population was placed under Muslim domination, in constant cultural exchange with Al-Andalus and the Near East. Later, many Sephardic Jews were forced to take refuge in Algeria from the 1391 persecutions in Catalonia, Valencia and the Balearic Islands, and from the expulsion from Spain in 1492. Together with the Moriscos, they thronged to the ports of North Africa and mingled with the native Jewish people. Abraham Lévy-Bacrat, a rabbi and one of the Jewish refugees from the 1492 expulsion from Spain, recorded that around 12,000 Jews arrived in the Kingdom of Tlemcen in what is today northwestern Algeria. In the 16th century there were large Jewish communities in places such as Oran, Bejaïa and Algiers. Jews were also present in the cities of the interior, such as Tlemcen and Constantine, and as far as Touggourt and M'zab in the south, with the permission of the Muslim authorities. Some Jews in Oran preserved the Ladino language, a uniquely conservative dialect of Spanish, until the 19th century. The fear of Spanish invasions in the 18th century caused Jews in Algeria to face potential expulsion and confiscation of property, similar to what had occurred in Spain. Jewish merchants did well financially in late Ottoman Algiers. The French attack on Algeria was provoked by the Dey's demands that the French government pay its large outstanding wheat debts to two Jewish merchants. Between the 16th and 17th centuries, richer Jews from Livorno in Italy started settling in Algeria. Commercial trading and exchanges between Europe and the Ottoman Empire reinforced the Jewish community. Later again in the 19th century, many Sephardic Jews from Tetouan settled in Algeria, creating new communities, particularly in Oran.
On the eve of the conquest of 1830, Algerian Judaism was as far removed culturally from the France of the Enlightenment as Islam. Three features characterise this distance. The first is civilisational. Algerian Judaism, and more broadly North African Judaism, is a traditional Judaism that bases its social and religious order not only on the law of God and rabbinical teaching, but also on a foundation of values, beliefs, and practices common to all North African ethnic groups. The centuries-long cohabitation with Islam has given rise to an original culture, as evidenced by the Judeo-Arabic language, fertility rituals, and the practice of maraboutism. The second characteristic is institutional. Submission to the Muslim sovereign had one major consequence for community organisation: independence. Indeed, the dhimma pact, which gave Jews the status of protected inferiors, granted - apart from its very restrictive aspects - almost total freedom to the communities in the management of their worship. Each community (independently of the others) was free to manage its own resources, appoint its own rabbis, keep civil records, and maintain its religious infrastructure (synagogues, ritual baths, cemeteries). Finally, the third determining factor is social and political. Community independence and extreme poverty resulted in a clannish organisation of Jewish society. The need to be represented before the Muslim sovereign stimulated competition between the large families and clientelism. In 1830, the Jewish population of Algeria was estimated at 26,000 mostly congregated in the coastal area. As a frontier population, natural intermediaries between Europeans and Muslims and fluent in Arabic, Jews were recruited as a priority to accompany French troops in the operations of conquest. Some Algerian Jews aided in the conquest, serving as interpreters or suppliers. 
However sympathetic some Algerian Jews were to the conqueror, the first priority was to subjugate the ‘indigenous’ populations, and in this respect the Jews were no exception. Although Ottoman subjection had been abolished with the conquest and annexation by France, they did not enjoy the rights of French citizens because they had a specific personal legal status of religious origin. Until the decree of 1870, the legal status of the Jews of Algeria was hardly different from that of Muslims. The Act of Capitulation of 5 July 1830 guaranteed the ‘inhabitants of Algeria’, whether Muslims or Jews, freedom of worship and respect for their religious traditions. In other words, Algerian Jews remained subject to the jurisdiction of the rabbis, in accordance with Mosaic law. More specifically, the natives of Algeria had the status of French citizens by virtue of the general principles of international law applied to cases of annexation. However, to avoid any confusion, and for fear of giving too much weight and too many rights to this status, the ruling specified that the natives did not enjoy the rights of French citizens due to the maintenance of their own laws (respect for religion recognised since 1830): ‘While not being a French citizen, the Muslim or Jewish native is French’. By 1841, the Jewish batei din (religious courts) were placed under French jurisdiction, linked to the Israelite Central Consistory of France. Regional Algerian courts, or consistoires, were put in place, operating under French oversight. On 9 November 1845, the French government organised community worship along metropolitan lines by creating an Algerian Israelite Consistory in Algiers and two provincial consistories, in Oran and Constantine (with metropolitan rabbis), thus completing the legal ‘assimilation’ of Algerian Jews.
The creation of consistories would make it possible to achieve two other objectives: firstly, to remove the communities from the authority of the rabbis, considered to be the breeding ground for fanaticism, by entrusting management to a secular and ‘enlightened’ elite; secondly, to break down the clan structure of indigenous society by imposing a single authority. In 1845, the French colonial government reorganized communal structure, appointing French Jews, who were Ashkenazi Jews, as chief rabbis for each region, with the duty "to inculcate unconditional obedience to the laws, loyalty to France, and the obligation to defend it". Such oversight was an example of the French Jews' attempt to "civilize" Jewish Algerians, as they believed their European traditions were superior to Sephardic practices. This marked a change in the Jewish relationship with the state. They were separated from the Muslim court system, where they had previously been classified as dhimmis. As a result, Algerian Jews resisted those French Jews attempting to settle in Algeria; in some cases, there was rioting, in others the local Jews refused to allow French Jewish burials in Algerian Jews' cemeteries. In 1865, the Senatus-Consulte liberalized rules of citizenship, to allow Jewish and Muslim "indigenous" peoples in Algeria to become French citizens if they requested it. Few did so, however, because French citizenship required renouncing certain traditional mores. The Algerians considered that a kind of apostasy. In October 1870, Adolphe Crémieux, a lawyer and former minister under the Second Republic, but also President of the Alliance Israélite Universelle, as Minister of Justice in the National Defence government, promulgated the decree that today bears his name. The decree declared the ‘indigenous Israelites’ of the Algerian departments to be French citizens and made them legally subject to the Civil Code. 
However, the Jews of the Southern Territories, an administrative entity that existed from 1902 to 1957, remained ‘indigenous’ subjects under ‘local civil status’ (also known as ‘personal status’ or ‘local status’); as a result, they had no political rights whatsoever. The importance of the decree lies in the massive and compulsory nature of the change in status. The decree met with resistance from hostile Algerian Jewish circles, especially from traditional Algerian rabbis faced with the intrusion of French Judaism. The French government granted the ‘indigenous Israelites’ (nothing is said about the explicit definition of the category of ‘Israelite’, unlike what would later happen under Vichy), who by then numbered some 33,000, French citizenship in 1870 under the Crémieux Decree, while maintaining an inferior status for Muslims who, though technically French nationals, were required to apply for French citizenship and undergo a naturalization process. For this reason, Algerian Jews are sometimes incorrectly categorized as pieds-noirs. The decision to extend citizenship to Algerian Jews was a result of pressure from prominent members of the liberal, intellectual French Jewish community, which considered the North African Jews to be "backward" and wanted to bring them into modernity. Within a generation, despite initial resistance, most Algerian Jews came to speak French rather than Arabic or Judaeo-Spanish, and they embraced many aspects of French culture. In embracing "Frenchness," the Algerian Jews joined the colonizers, although they were still considered "other" to the French. Although some took on more typically European occupations, "the majority of Jews were poor artisans and shopkeepers catering to a Muslim clientele." Moreover, conflicts between Sephardic Jewish religious law and French law produced contention within the community. They resisted changes related to domestic issues, such as marriage.
The Crémieux decree, which brought so-called ‘indigenous’ Jews into the fold of French citizenship, de facto separated Muslims and Jews on a legal and civic level. The latter, albeit with apparent differences depending on the region, welcomed the French policy of assimilation, into which many threw themselves wholeheartedly, and which accelerated the march towards Westernisation. In everyday life, however, relations were often cordial and even fraternal, with Jews not being assimilated into the colonists and frequently acting as intermediaries between Muslims and Europeans. In the end, the naturalisation decree of October 1870 was a measure devised by the ruling circle of French Judaism. It was in no way the consecration of a de facto state of affairs—namely the spontaneous francization of Algerian Jews—but a measure to encourage them to enter (willingly or by force) into French normality. After the 1882 conquest of the M'zab, the French government in Algeria legally categorized southern Algerian Jews, like the Muslims, as "indigènes", and thus subject to restricted and decreased rights under the indigénat compared to their northern Jewish counterparts, who were still French citizens under the Crémieux Decree of 1870. In 1881, there were only about 30,000 Mozabite Jews in Southern Algeria. The French authorities established "local civil status" laws in Southern Algeria, with rabbis overseeing legal issues. The French government recognized Jewish laws pertaining to domestic issues, such as marriage and inheritance. While these laws allowed Jews to live under halakha, they prevented southern Jews from accessing "elite" opportunities, as their indigenous status established them as lesser citizens. French antisemitism set down strong roots among the expatriate French community in Algeria, where every municipal council was controlled by anti-Semites and newspapers were rife with xenophobic attacks on the local Jewish communities.
Much of this was encouraged by the French colonial administration, in particular by the militant antisemite Max Régis. In Algiers, when Émile Zola was brought to trial for his defense of Alfred Dreyfus in the 1898 open letter J'Accuse…!, more than 158 Jewish-owned shops were looted and burned and two Jews were killed, while the army stood by and refused to intervene (see 1898 Algerian riots); sympathy for Dreyfus was widespread in the Arabic press. Hannah Arendt later commented that pogroms against Jews in Algeria were carried out not, as was claimed, by "backward Arabs" but by "thoroughly sophisticated officers of the French colonial administration" and by the mayor of Algiers, Max Régis. Under French rule, some Muslim anti-Jewish riots still occurred, as in 1897 in Oran. In the late 19th century and during the 1930s, mayors elected on anti-Jewish agendas sought to disenfranchise Jewish voters in their municipalities, as seen in Sidi-Bel-Abbès, when they could not directly repeal the Crémieux Decree. In these municipalities, Jewish voters were required to provide proof that they or their ancestors had been born in Algeria before 1830. Failure to provide such proof was considered attempted fraud and resulted in removal from the electoral rolls. In 1931, Jews made up less than 2% of Algeria's total population. This population was concentrated in the largest cities: Algiers, Constantine, and Oran, each of which had a Jewish population of over 7%. Many smaller cities, such as Blida, Tlemcen, and Setif, also had small Jewish populations. By the mid-thirties, François de La Rocque's extremist Croix-de-Feu and, later, the French Social Party movements in Algeria proved active in trying to turn Muslims against Algerian Jews by publishing tracts in Arabic, and were responsible for inciting the 1934 Constantine Pogrom, in which 25 Jews were killed and some 200 stores were pillaged.
With regard to the scope of Zionism in the upper echelons of Algerian Jewry, it was observed at the time that "the wealthy and influential Jews of Algeria are opposed to Zionism and so far we have not counted on their support". They considered that ‘being French first and foremost, they have no interest in the question of Zionism and that they belong here’. One of the first moves of the pro-German Vichy regime was to revoke the effects of the Crémieux Decree, on October 7, 1940, thereby abolishing French citizenship for Algerian Jews and affecting some 110,000 Algerians. Under Vichy rule in Algeria, even Karaites and Jews who had converted to another religion were subject to the antisemitic laws known collectively as the Statut des Juifs. The Vichy regime's laws ensured that Jews were forbidden from holding public office or other governmental positions, as well as from holding jobs in industries such as insurance and real estate. In addition, the Vichy regime set strict limitations on Jewish people working as doctors or lawyers. This policy of blacklisting Jews was in no way dictated by Nazi Germany; it was exclusively the initiative of the Vichy government. The Vichy regime also limited the number of Jewish children in Algeria's public school system, and eventually terminated all Jewish enrollment in public schools. In response, Jewish professors who had been forced from their jobs set up a Jewish university in 1941, only for it to be forcibly dissolved at the end of that same year. The Jewish communities of Algeria also set up a system of Jewish primary schools for children, and by 1942 some 20,000 Jewish children were enrolled in 70 elementary and 5 secondary schools all over Algeria. The Vichy government eventually created legislation allowing it to control school curricula and schedules, which helped dampen efforts to educate young Jews in Algeria.
It was in this context that the French authorities carried out a legal spoliation of Jewish property, known as ‘economic Aryanisation’ in Nazi-inspired terminology, in order to ‘eliminate all Jewish influence’ in the economy. In the space of a few months, more than 2,500 properties were taken from their owners and entrusted to ‘commissaires-gérants’ who had put themselves forward as candidates and whose dossiers had been validated. While most of them were French nationals from the colony, there were also a handful of Muslims among the provisional administrators whose applications had not been rejected by the authorities. Careful examination of the archives reveals a geography of spoliation. As the Jews were main actors in trading with the Muslims, the colonial authorities sometimes feared that spoliation would jeopardise the fragile social balance, or even create unrest among the ‘indigenous’ population that would be difficult to control. After the Allied landings, the return to normality was slow and many dispossessed owners had difficulty recovering their property. Under Admiral Darlan and General Giraud, two French officials who administered the French military in North Africa, the antisemitic legislation was applied more severely in Algeria than in France itself, under the pretext that it ensured greater equality between Muslims and Jews; the two men also considered the racial laws a condition sine qua non of the armistice. Under the Vichy regime in Algeria, an office called the "Special Department for the Control of the Jewish Problem" handled the execution of laws applying to Algeria's Jewish population. This was unique in French North Africa, and as such the laws covering the status of Jews were applied much more harshly in Algeria than in Morocco or Tunisia. A bureau for "Economic Aryanization" was also installed in order to eradicate the Jewish community's significance in the economy, mostly by taking control of Jewish businesses.
On March 31, 1942, the Vichy government issued a decree demanding the creation of a local Jewish government called the Union Générale des Israélites d'Algérie (UGIA). The UGIA was intended to be a body of Jews that would execute the Vichy regulations within Jewish communities, and was seen by much of the Jewish population as collaboration with the government. In response, many young Jews joined the Algerian resistance movement, which itself had been founded by Jews in 1940. On November 8, 1942, the Algerian resistance to the Vichy government took part in the takeover of Algiers in preparation for the Allied liberation of North Africa, known as "Operation Torch". Of the 377 resistance members who took Algiers, 315 were Jewish. In November 1942, Allied forces landed and took control of Algiers and the rest of Algeria. However, Jews did not recover all of their former civil rights and liberties, nor their French citizenship, until 1943. This can partially be explained by the fact that Giraud himself, along with the Governor-General Marcel Peyrouton, in promulgating the cancellation of the Vichy statutes on March 14, 1943, after the Allies landed in North Africa, exceptionally retained the decree abolishing citizenship rights for Algerian Jews, claiming that they did not wish to incite violence between the Jewish and Muslim communities in Algiers. It was not until the arrival of Charles de Gaulle in October 1943 that Jewish Algerians finally regained their French citizenship with the reinstatement of the Crémieux Decree. In addition to the discriminatory and antisemitic laws faced by Jews throughout Algeria, some 2,000 Jews were placed in concentration camps at Bedeau and Djelfa. The camp at Bedeau, near Sidi-bel-Abbes, became a place for the concentration of Jewish Algerian soldiers, who were forced to perform hard labor.
These prisoners formed the "Jewish Work Group," and worked on a Vichy plan for a trans-Saharan railroad; many died from hunger, exhaustion, disease, or beatings. During the Algerian War, most Algerian Jews sided with France, out of loyalty to the Republic which had given them French citizenship, against the Arab independence movement, though they rejected that part of the official policy which proposed independence for Algeria. Some Jews did join the FLN fighting for independence, but a larger group made common cause with the OAS, a secret paramilitary group. Throughout the independence war, violence remained palpable and relations deteriorated following clashes and assassinations, such as those of the rabbi of Nédroma in November 1956, the chief rabbi of Médéa in March 1957, and the great singer Cheikh Raymond Leyris. The FLN published declarations guaranteeing a place in Algeria for Jews as an integral constituent of the Algerian people, hoping to attract their support. Algerian Muslims had assisted Jews during their trials under the Vichy régime in World War II, when their citizenship rights under the Crémieux Decree had been revoked. Some Algerian Jews responded positively to the call from the FLN, joining with local militias or making financial contributions. These Jews recognized both a common attachment to Algeria and the antisemitism prevalent among the French. For others, memories of the 1934 pogrom, and incidents of violent Muslim assault on Jews in Constantine and Batna, together with arson attacks on the Batna and Orleanville synagogues, played a role in their decisions to turn down the offer. In 1961, with the French National Assembly Law 61-805, the Mozabite Jews, who had been excluded from the Crémieux Decree, were also given French citizenship. Between late 1961 and late summer 1962, 130,000 of Algeria's approximately 140,000 Jews left for France, while about 10,000 of them emigrated to Israel.
Moroccan Jews who were living in Algeria and Jews from the M'zab Valley in the Algerian Sahara, who did not have French citizenship, as well as a small number of Algerian Jews from Constantine, also emigrated to Israel at that time. The fact that Israel was unable to attract more Jewish immigrants from Algeria dismayed Zionist representatives in Algeria as well as the Israeli authorities; Zionism remained a marginal movement within Algerian Jewish society compared to other North African countries. In any case, it is clear that the weakness of Zionist action in Algeria was largely due to the French nationality of Algerian Jews. Algerian Jews left Algeria en masse out of fear of a return to the status of dhimmi, which an independent state founded on an Arab-Muslim identity, the pillar of the future Algeria, might engender, and in which the Jewish minority would find no place, as many had foreseen on the eve of independence. In 1961, the Bizerte crisis caused upheaval and increased the rate of Jewish migration to France. Jews, considered to be responsible for the conflict because of their proximity to Europeans, were the victims of anti-Semitic movements. In just a few months, France received as many refugees as in the previous six years. Following a 1961 referendum, the 1962 Évian Accords secured Algerian independence. Some Algerian Jews had joined the Organisation armée secrète, which aimed to disrupt the process of independence with bombings and assassination attempts, whose targets included Charles de Gaulle and Jean-Paul Sartre. These accords led to a mass exodus of ‘pieds noirs’ in the early 1960s, while North African Jews faced a wave of anti-Semitism in the Maghreb. Moreover, Algerian Jews also identified with France, an attachment they had revealed in the nineteenth century in their fight for French naturalisation.
The Jews of Algeria, but also of Morocco and Tunisia, showed a great attachment to metropolitan France, as shown by their attitude during the colonial wars and their choice to settle in France: "The Jews felt perfectly French and proud to be so." Moreover, the anti-Semitic violence that had been manifesting itself in the colony since the last quarter of the nineteenth century affected every aspect of daily life in minute detail. More than 90% of Algerian Jews (110,000 out of about 130,000) opted for France; they left Algeria en masse not because they were persecuted there as Jews but because they had so deeply internalized their "Frenchness" that they considered their destiny linked to that of the French, although some went to Israel. By 1969, fewer than 1,000 Jews were still living in Algeria. By 1975, because of a lack of worshippers, all but one of the country's synagogues had been closed, converted into mosques or libraries. After the Evian agreements of 19 March 1962, the vast majority of the remaining Jews in Algeria were among the 800,000 French people who crossed the Mediterranean in the space of a few months. Regarded as repatriates in the same way as the 'Pieds Noirs', they gradually integrated into the French Jewish community, which they helped to reshape, like their co-religionists who had previously arrived from Egypt, Tunisia and Morocco. The number of those who settled in Israel is estimated at 10,000. As for those who remained in Algeria, they could only observe that the young independent nation was being built on an Arab-Muslim identity, since the law on the nationality code of 12 March 1963 automatically granted Algerian nationality only to those with a ‘nationality of origin’, defined by their Muslim personal status under French rule.
Since 2005, the Algerian government has attempted to reduce discrimination against the Jewish population by establishing a Jewish association and passing a law that recognized freedom of religion. It also allowed the relaunching of Jewish pilgrimages to some of the holiest Jewish sites in North Africa. In 2014, the Minister of Religious Affairs Mohammed Eissa announced that the Algerian government would foster the reopening of Jewish synagogues. However, this never came to fruition, with Eissa stating that it was no longer in the interest of Algerian Jews. In 2017, there were an estimated 50 Jews remaining in Algeria, mostly in Algiers. As of 2020, there were an estimated 200 Jews in Algeria. Traditional dress According to the Jewish Encyclopedia, a contemporary Jewess of Algiers wears on her head a "takrita" (handkerchief), is dressed in a "bedenor" (gown with a bodice trimmed with lace) and a striped vest with long sleeves coming to the waist. The "mosse" (girdle) is of silk. The native Algerian Jew wears a "ṭarbush" or oblong turban with silken tassel, a "ṣadriyyah" or vest with large sleeves, and "sarwal" or pantaloons fastened by a "ḥizam" (girdle), all being covered by a mantle, a burnus (also spelled burnoose), and a large silk handkerchief, the tassels of which hang down to his feet. At an earlier stage the Algerian Jewess wore a tall cone-shaped hat resembling those used in England in the fifteenth century. Genetics The largest study to date on the Jews of North Africa was led by Gerard Lucotte et al. in 2003. The Sephardi population studied was as follows: 58 Jews from Algeria, 190 from Morocco, 64 from Tunisia, 49 from the island of Djerba, and 9 and 11 from Libya and Egypt, respectively, making 381 people in all. This study showed that the Jews of North Africa had paternal haplotype frequencies almost equal to those of Lebanese and Palestinian non-Jews when compared with European non-Jews.
The Moroccan/Algerian, Djerban/Tunisian and Libyan subgroups of North African Jewry were found to demonstrate varying levels of Middle Eastern (40-42%), European (37-39%) and North African ancestry (20-21%), with Moroccan and Algerian Jews tending to be genetically closer to each other than to Djerban and Libyan Jews. According to the study: "distinctive North African Jewish population clusters with proximity to other Jewish populations and variable degrees of Middle Eastern, European, and North African admixture. Two major subgroups were identified by principal component, neighbor joining tree, and identity-by-descent analysis—Moroccan/Algerian and Djerban/Libyan—that varied in their degree of European admixture. These populations showed a high degree of endogamy and were part of a larger Ashkenazi and Sephardic Jewish group. By principal component analysis, these North African groups were orthogonal to contemporary populations from North and South Morocco, Western Sahara, Tunisia, Libya, and Egypt. Thus, this study is compatible with the history of North African Jews—founding during Classical Antiquity with proselytism of local populations, followed by genetic isolation with the rise of Christianity and then Islam, and admixture following the emigration of Sephardic Jews during the Inquisition." Population numbers There are gaps in the information available on the Jewish population in Algeria over time, and researchers work around them as best they can, which leads to variations in estimates and results depending on the source. These estimates are clearly flawed because part of the territory was outside French control, because only ‘municipal populations’ were counted, because rural dwellers were not counted, and because the evaluation methods were incomplete.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nordic_aliens] | [TOKENS: 810]
Contents Nordic aliens In ufology and the study of alleged extraterrestrial beings and lifeforms visiting Earth, "Nordics", "Nordic aliens" or "Tall Whites" are among the names given to one of several purported humanoid races hailing from the Pleiades star cluster (i.e., Pleiadians), as they reportedly share superficial similarities with "Nordic", Germanic, or Scandinavian humans. Alleged contactees describe Nordics as being somewhat taller than the average human, standing roughly 6–7 ft (1.8–2.1 m) in height (with proportional weight), and showing stereotypically "European" or "White" features, such as long, straight blond hair, blue eyes, and fair skin. The skin tone has also been reported by individuals who say they have seen such beings as being a pale blue-grey or pastel purple. In the 1950s, George Adamski, a Polish-American ufologist, was among the first to publicly report his alleged contact with Nordic beings. Scholars note that the mythology of extraterrestrial visitations from such beings (with physical features superficially described as "Aryan") often makes mention of telepathy, benevolence, and physical beauty and grace; however, many purported alien and extraterrestrial encounters also involve some degree of telepathy serving as the primary communication with human beings. History Cultural historian David J. Skal wrote that early stories of Nordic-type aliens may have been partially inspired by the 1951 film The Day the Earth Stood Still, in which an extraterrestrial arrives on Earth to warn humanity about the dangers of atomic weapons. Bates College professor Stephanie Kelley-Romano described alien abduction beliefs as "a living myth", and notes that, among believers, Nordic aliens "are often associated with spiritual growth and love and act as protectors for the experiencers."
In contactee and ufology literature, Nordic aliens are often described as benevolent or even "magical" beings who want to observe and communicate with humans and are concerned about the Earth's ecology or prospects for world peace. Believers also ascribe telepathic powers to Nordic aliens, and describe them as "paternal, watchful, smiling, affectionate, and youthful". During the 1950s, many people alleging to be contactees, especially those in Europe, claimed encounters with beings fitting this description. Such claims became relatively less common in subsequent decades, as the grey alien supplanted the Nordic in most alleged accounts of extraterrestrial encounters. Publications from people who claim to have been contacted and the topic in popular culture Books claiming personal contact with Nordic aliens include George Adamski's Flying Saucers Have Landed and Inside the Space Ships, Howard Menger's From Outer Space to You, Travis Walton's The Walton Experience, and Charles James Hall's Millennial Hospitality (adapted as the 2020 film Walking with the Tall Whites). The literature of the UFO religion Universe People describes a variety of such interactions, published as "Talks with Teachings from my Cosmic Friends". The Brazilian science fiction novella Major Atlas - Uma Novela Sobre Alienígenas Nórdicos explores the Nordic alien (Pleiadian) theme at unusual length. The narrative centers on a police officer who, after exposure to a mysterious cosmic essence, is endowed with superhuman abilities strikingly similar to those of Superman. As the plot unfolds, the protagonist realizes he is a pawn in a larger, manipulated narrative and slowly uncovers the secrets of the Pleiadian beings, touching on themes of manipulation, hidden power structures, and the nature of reality.
The book uses the Nordic alien mythology not merely as a backdrop but as a fundamental, driving force of its central conflict and character development.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-lj-bdfl-resignation-48] | [TOKENS: 4314]
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0.
Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. Python 2.0 was released on 16 October 2000, introducing many new features, such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x versions received final security updates, ending with Python 3.9.24 and then 3.9.25, the last version in the 3.9 series. Python 3.10 has been, since November 2025, the oldest supported branch.
Python 3.15 has had an alpha release, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules.
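The functional tools mentioned above can be sketched in a few lines (the variable names here are purely illustrative):

```python
from functools import reduce        # reduce moved out of the builtins in Python 3
from itertools import accumulate

nums = [1, 2, 3, 4, 5]

# Built-in functional primitives
evens = list(filter(lambda n: n % 2 == 0, nums))   # [2, 4]
squares = list(map(lambda n: n * n, nums))         # [1, 4, 9, 16, 25]
total = reduce(lambda a, b: a + b, nums)           # 15

# Comprehensions and generator expressions often replace map/filter
even_squares = [n * n for n in nums if n % 2 == 0]  # [4, 16]
lazy_sum = sum(n * n for n in nums)                 # 55, computed lazily

# itertools provides lazy tools borrowed from functional languages
running = list(accumulate(nums))                   # [1, 3, 6, 10, 15]
```

Note that functools and itertools are themselves ordinary standard-library modules, in keeping with the extensibility-via-modules design just described.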
This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do..while loops, which van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials.
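The claim about multiple string-formatting styles is easy to demonstrate; the three styles below all produce the same result:

```python
name, score = "Ada", 95.5

s1 = "%s scored %.1f" % (name, score)         # printf-style formatting
s2 = "{} scored {:.1f}".format(name, score)   # str.format (since Python 2.6)
s3 = f"{name} scored {score:.1f}"             # f-string (since Python 3.6)

assert s1 == s2 == s3 == "Ada scored 95.5"
```

Which style to prefer is largely a matter of convention; f-strings are generally favoured in modern code.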
For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Among Python's statements is the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will.
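A minimal sketch of both points, indentation-delimited blocks and name rebinding (the function name is illustrative):

```python
def classify(n):
    # The indented lines form the block; no braces are needed.
    if n % 2 == 0:
        kind = "even"
    else:
        kind = "odd"
    return kind

x = 42        # the name x refers to an int object
x = "spam"    # the same name may be rebound to a str at any time

print(classify(10))   # prints "even"
```

Recursion-heavy styles, by contrast, are constrained by the absence of tail-call optimization noted above.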
However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to some duplicated functionality. For example, a statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations.
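These optional annotations attach type information to parameters and return values without affecting runtime behavior; a minimal sketch (the function here is illustrative):

```python
def greet(name: str, excited: bool = False) -> str:
    # The annotations are hints only; CPython does not check them
    # when the function is called.
    return "Hello, " + name + ("!" if excited else ".")

print(greet("world"))
print(greet("world", excited=True))
# The hints are stored on the function object, where static checkers
# or runtime inspection can find them:
print(greet.__annotations__)
```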
These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a module typing that provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2.
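The division, modulo, and rounding rules described above can be checked directly:

```python
# True division always yields a float; floor division rounds toward
# negative infinity (not toward zero, as in C).
assert 7 / 2 == 3.5
assert 7 // 2 == 3
assert -7 // 2 == -4

# The invariant b*(a//b) + a%b == a holds for negative operands too,
# which is why a remainder can take the sign of the divisor:
a, b = 4, -3
assert b * (a // b) + a % b == a
assert 4 % -3 == -2

# Exponentiation, and round-half-to-even tie-breaking:
assert 5 ** 3 == 125
assert 9 ** 0.5 == 3.0
assert round(1.5) == round(2.5) == 2
print("all arithmetic identities hold")
```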
Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is -1.0. Python allows Boolean expressions that contain multiple equality relations, consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header.

Code examples

Typical introductory examples include a "Hello, World!" program and a program to calculate the factorial of a non-negative integer.

Libraries

Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing.
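A sketch of the def syntax and default parameter values described earlier, together with a factorial program of the kind the text mentions (the names here are illustrative):

```python
def greet_all(greeting, names=("spam", "eggs")):
    # "names" falls back to its default tuple when no argument is given.
    for name in names:
        print(f"{greeting}, {name}!")

def factorial(n):
    """Factorial of a non-negative integer, computed iteratively."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

greet_all("Hello")
print(factorial(5))
```

Because Python integers have arbitrary precision, factorial(100) and beyond work without overflow.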
Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages.

Development environments

Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs.

Implementations

CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python, and is available for many platforms, including Windows and most modern Unix-like systems such as macOS (with support for Apple M1 Macs since Python 3.9.1, using an experimental installer).
Starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there was once unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported, but support has since been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; far fewer operating systems are supported than in the past, as many outdated platforms have been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binary sizes massive even for small programs, although some implementations are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers, and some older projects and compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13.
In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches, strategies, and tools for optimizing Python performance, despite the inherent slowness of an interpreted language.

Language Development

Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies.

Naming

Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language.
Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
========================================
[SOURCE: https://github.com/features/discussions] | [TOKENS: 231]
The home for developer communities: ask questions, share ideas, and build connections with each other—all right next to your code. GitHub Discussions enables healthy and productive software collaboration. Dedicated space for conversations: decrease the burden of managing active work in issues and pull requests by providing a separate space to host ongoing discussions, questions, and ideas. Customize: personalize the space for your community and team to make it unique for you and your collaborators. Monitor insights: track the health and growth of your community with a dashboard full of actionable data, including the count of total contribution activity to Discussions, Issues, and PRs; total page views to Discussions segmented by logged-in vs anonymous users; and the count of unique users who have reacted, upvoted, marked an answer, commented, or posted in the selected period.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-Valve_273-0] | [TOKENS: 12858]
Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Minecraft was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends.
A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time.

Gameplay

Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with non-player character (NPC) villagers by trading emeralds for different goods and vice versa. The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes.
The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. 
The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. 
Killing the dragon opens access to an exit portal which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor, and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.
The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the world as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs.
The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Bedrock Edition Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users, and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs, and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms.
The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. 
Another, based on Fallout, was released on consoles that December, and for Windows and mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs, and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers.
One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. Among the features Persson explored in RubyDung was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including the return of the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal.
Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues.
On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development began for the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—in May 2009,[k] and the game was first shown publicly on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh.
On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and its licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, shortly after Microsoft acquired Mojang, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition.
The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update ported the Pocket Edition to the Xbox One and renamed it the Bedrock Edition, spanning Xbox One, Windows 10, VR, and mobile platforms and enabling cross-play between these versions.
Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native Bedrock Edition version for PlayStation 5 was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows. On 20 August 2018, Mojang announced that it would bring Education Edition to the iPad in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available for Chromebooks compatible with the Google Play Store. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A dedicated version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems.
The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints.
Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design

Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously.
The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the music software Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not been released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."

Reception

Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable".
Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial, in-game tips, and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls.
The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time.
As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. As of 4 April 2014, the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country.
Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. 
In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2022. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items.
The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022.
Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed, and losing mobs would now have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters.
Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired.

Cultural impact

In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. By 2018, it was still YouTube's biggest game globally.
Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation.
Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase.
The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is among the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang.
MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or merely superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans.
A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). These fears proved unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn.
Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob/biome votes and announcements of new game updates. In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-44]
Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which includes organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion.
Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies, or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia.
In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. 
In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, who often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals.
Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda.
Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus which may have reached 39 metres. Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million.
Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously.
That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. 
The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. (Cladogram labels: Holomycota (inc. fungi); Ichthyosporea; Pluriformea; Filasterea.) The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with the following cladogram for the sponge-sister view that they supported (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges): (Porifera, (Ctenophora, (Placozoa, (Cnidaria, Bilateria)))). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct the following ctenophore-sister phylogeny: (Ctenophora, (Porifera, (Placozoa, (Cnidaria, Bilateria)))). Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals.
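The two competing branching orders described here (sponge-sister versus ctenophore-sister) can be written down in Newick notation, the standard text format for phylogenetic trees, in which the taxon outside the innermost nesting diverged first. The following is a hypothetical illustrative sketch, not part of the article:

```python
# The two root-of-animals hypotheses as ladder-shaped Newick strings.
# Sponge-sister (Feuda et al. 2017): Porifera is sister to all other animals.
SPONGE_SISTER = "(Porifera,(Ctenophora,(Placozoa,(Cnidaria,Bilateria))));"

# Ctenophore-sister (Schultz et al. 2023): Ctenophora is sister to all other animals.
CTENOPHORE_SISTER = "(Ctenophora,(Porifera,(Placozoa,(Cnidaria,Bilateria))));"

def first_branching(newick: str) -> str:
    """Return the earliest-diverging taxon in a fully nested ("ladder") Newick string.

    In such a tree, the first label after the opening parentheses is the
    sister group to everything else.
    """
    return newick.lstrip("(").split(",", 1)[0]
```

Comparing the two strings shows that the hypotheses differ only in which of the two phyla, Porifera or Ctenophora, occupies the outermost position; the nesting of Placozoa, Cnidaria, and Bilateria is the same in both.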
They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research. The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria comprises the Xenacoelomorpha, Ambulacraria, Chordata, Ecdysozoa, and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles, that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells.
However, over evolutionary time, descendant lineages have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes.
The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs. History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess') and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians.
In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. 
Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were discovered in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. 
Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Old_School_Renaissance] | [TOKENS: 857]
Contents Old School Renaissance The Old School Renaissance, Old School Revival, or OSR is a play style movement in tabletop role-playing games which draws inspiration from the earliest days of tabletop RPGs in the 1970s, especially Dungeons & Dragons. It consists of a loose network or community of gamers and game designers who share an interest in a certain style of play and set of game design principles. Terminology The terms "old school revival" and "old school renaissance" were first used on the Dragonsfoot forum as early as 2004 and 2005, respectively, to refer to a growing interest in older editions of Dungeons and Dragons and games inspired by those older editions. By February of 2008, a pre-launch call for submissions for Fight On! magazine described it as "a quarterly fanzine for the old-school Renaissance". The two terms (revival and renaissance) continue to be used interchangeably according to user preference, though a 2018 survey found that most respondents understood the R in OSR to mean "renaissance" over "revival", with "rules" and "revolution" as distant third- and fourth-place choices. Ben Milton describes the use of "Revival" as a return to older role-playing games, and "Renaissance" as taking inspiration from the kinds of play they engendered. History The OSR movement first developed in the early 2000s, primarily in discussion on internet forums such as Dragonsfoot, Knights & Knaves Alehouse, and Original D&D Discussion, soon expanded by discussion on a large and diverse network of blogs. Partly as a reaction to the publication of the Third Edition of Dungeons and Dragons, interest in and discussion of "old school" play also led to the creation of Dungeons and Dragons retro-clones (legal emulations of RPG rules from the 1970s and early 1980s), including games such as Labyrinth Lord and OSRIC which were developed in OSR-related forums. Zines dedicated to OSR content, such as Fight On! and Knockspell, began to be published as early as 2008. 
In addition to the development of internet platforms and printed rule books, other printed OSR products became widely available. In 2008, Matthew Finch (creator of OSRIC) released his free "Quick Primer for Old School Gaming", which tried to sum up the OSR aesthetic. Print-on-demand sites such as Lulu and DriveThruRPG allowed authors to market periodicals, such as Fight On! and many new adventure scenarios and game settings. These continue to be created and marketed, along with older, formerly out of print gaming products, via print-on-demand services. In 2012, Wizards of the Coast began publishing reprints and PDFs of Advanced Dungeons and Dragons and Dungeons and Dragons Basic Set materials, possibly in response to a perceived market for these materials driven by the OSR. By the early 2020s, the OSR had inspired such diverse developments in tabletop gaming that new classifications such as "BrOSR", "Classic OSR", "OSR-Adjacent", "Nu-OSR/NSR" and "Commercial OSR" were being used. Games A variety of published RPGs can be understood to be influenced by or part of the OSR trend, ranging from emulations of specific editions of Dungeons and Dragons such as OSRIC, Old-School Essentials, and Labyrinth Lord to games such as The Black Hack, Mörk Borg, and Electric Bastionland, which are designed to recreate the "feel" of 1970s roleplaying while taking only slight (if any) inspiration from the early rules. Style of play Broadly, OSR games encourage a tonal fidelity to early editions of Dungeons & Dragons—less emphasis on predefined endings, and a greater emphasis on player choice determining the fate of characters. OSR games provide play where wrong decisions can easily become lethal for characters and do not guarantee satisfying endings to character arcs. Characters live and die by player choice as opposed to the story's needs. Matthew Finch, in his 2008 book A Quick Primer for Old School Gaming, sets out the four pillars of OSR. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-30] | [TOKENS: 4993]
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. 
However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. 
Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. 
Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. 
Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. 
This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) together form the reflex bow or the lifted scythe. In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), and the stars "hanging" from the Belt are known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. 
The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times: Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). It names the constellation "Kesil" (כסיל, literally 'fool'), a name perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic, saif al-jabbar, meaning "sword of the giant". 
In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. 
Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. The chief's daughter offered to marry whoever could retrieve the arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned it and married her, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino peoples referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. The film distribution company Orion Pictures used the constellation as its logo. 
In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He is sometimes depicted with a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). 
His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Halide_(programming_language)] | [TOKENS: 265]
Contents Halide (programming language) Halide is a computer programming language designed for writing digital image processing code that takes advantage of memory locality, vectorized computation and multi-core central processing units (CPU) and graphics processing units (GPU). Halide is implemented as an internal domain-specific language (DSL) in C++. Halide was announced by MIT in 2012 and released in 2013. Purpose The main innovation Halide brings is the separation of the algorithm being implemented from its execution schedule, i.e. the code specifying the loop nesting, parallelization, loop unrolling and vector instructions. The two are usually interleaved, and experimenting with changing the schedule requires the programmer to rewrite large portions of the algorithm with every change. With Halide, changing the schedule does not require any changes to the algorithm, allowing the programmer to experiment with scheduling. Scheduled blur function The following function defines and sets the schedule for a 3×3 box filter defined as a series of two 3×1 passes, allowing the blur algorithm to remain independent of the execution schedule. Uses and development Halide was developed primarily at MIT's CSAIL lab. Both Google and Adobe have been involved in Halide research. Google uses Halide in Pixel 2's Pixel Visual Core. Adobe Photoshop also uses Halide. See also References External links
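The scheduled blur function described above can be sketched as follows. This is the well-known two-pass box-filter example from the Halide literature, written against the Halide C++ API; it assumes the Halide library and its "Halide.h" header are available, and the specific tile and vector sizes are illustrative choices rather than required values.

```cpp
#include "Halide.h"
using namespace Halide;

// 3x3 box blur as two 3x1 passes. The algorithm (what is computed)
// is written separately from the schedule (how it is computed).
Func blur_3x3(Func input) {
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    // The algorithm: a horizontal pass followed by a vertical pass.
    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // The schedule: tile the output, vectorize within tiles, and run
    // rows of tiles in parallel; compute the horizontal pass within
    // each tile of the vertical pass for locality.
    blur_y.tile(x, y, xi, yi, 256, 32)
          .vectorize(xi, 8)
          .parallel(y);
    blur_x.compute_at(blur_y, x)
          .vectorize(x, 8);

    return blur_y;
}
```

Deleting the schedule lines leaves a correct (if slower) blur, which is exactly the independence of algorithm and schedule that the section describes.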
========================================