[SOURCE: https://en.wikipedia.org/wiki/Vertebrate]
Vertebrate Vertebrates (/ˈvɜːrtəbrɪt, -ˌbreɪt/), also called craniates, are animals with a vertebral column and a cranium. The vertebral column surrounds and protects the spinal cord, while the cranium protects the brain. The vertebrates make up the subphylum Vertebrata (/ˌvɜːrtəˈbreɪtə/ VUR-tə-BRAY-tə) with some 65,000 species, by far the largest ranked grouping in the phylum Chordata. The vertebrates include mammals, birds, amphibians, and various classes of fish and reptiles. The fish include the jawless Agnatha and the jawed Gnathostomata. The jawed fish include both the cartilaginous fish and the bony fish. Bony fish include the lobe-finned fish, which gave rise to the tetrapods, the animals with four limbs. Despite their success, vertebrates make up less than five percent of all described animal species. The first vertebrates appeared in the Cambrian explosion some 518 million years ago. Jawed vertebrates evolved in the Ordovician or Silurian; bony fishes appeared in the Silurian and diversified widely in the Devonian. The first tetrapods appeared towards the end of the Devonian, and the first amphibians appeared on land in the Carboniferous. During the Triassic, mammals and dinosaurs appeared, the latter giving rise to birds in the Jurassic. Extant species are roughly equally divided between fishes of all kinds and tetrapods. Populations of many species have been in steep decline since 1970 because of land-use change, overexploitation of natural resources, climate change, pollution and the impact of invasive species. Characteristics Vertebrates belong to Chordata, a phylum characterised by five synapomorphies (unique characteristics): namely a notochord, a hollow nerve cord along the back, a post-anal tail, an endostyle (often as a thyroid gland), and pharyngeal gills arranged in pairs. Vertebrates share these characteristics with other chordates. Vertebrates are distinguished from all other animals, including other chordates, by multiple synapomorphies: namely a vertebral column; a skull of bone or cartilage; a large brain divided into three or more sections; a muscular heart with multiple chambers; an inner ear with semicircular canals; sense organs including the eyes, ears, and nose; and digestive organs including the intestines, liver, pancreas, and stomach. Vertebrates (and other chordates) belong to the Bilateria, a group of animals with mirror-symmetrical bodies. They move, typically by swimming, using muscles along the back, supported by a strong but flexible skeletal structure, the spine or vertebral column. The name 'vertebrate' derives from the Latin vertebratus, 'jointed', from vertebra, 'joint', in turn from Latin vertere, 'to turn'. As embryos, vertebrates still have a notochord. In all but the jawless fishes, it is replaced with a vertebral column (made of bone or cartilage) during development. Vertebrate embryos have pharyngeal arches; in adult fish, these support the gills, while in adult tetrapods they develop into other structures. In the embryo, a layer of cells along the back folds and fuses into a hollow neural tube. This develops into the spinal cord, and at its front end, the brain. The brain receives information about the world through nerves which carry signals from sense organs in the skin and body.
Because the ancestors of vertebrates usually moved forwards, the front of the body encountered stimuli before the rest of the body, favouring cephalisation, the evolution of a head containing sense organs and a brain to process the sensory information. Vertebrates have a tubular gut that extends from the mouth to the anus. The vertebral column typically continues beyond the anus to form an elongated tail. The ancestral vertebrates, and most extant species, are aquatic and carry out gas exchange in their gills. The gills are finely branched structures which bring the blood close to the water. They are positioned just behind the head, supported by cartilaginous or bony branchial arches. In jawed vertebrates, the first gill arch pair evolved into the jaws. In amphibians and some primitive bony fishes, the larvae have external gills, branching off from the gill arches. Oxygen is carried from the gills to the body in the blood, and carbon dioxide is returned to the gills, in a closed circulatory system driven by a chambered heart. The tetrapods have lost the gills of their fish ancestors; they have adapted the swim bladder (that fish use for buoyancy) into lungs to breathe air, and the circulatory system is adapted accordingly. At the same time, they adapted the bony fins of the lobe-finned fishes into two pairs of walking legs, carrying the weight of the body via the shoulder and pelvic girdles. Vertebrates vary in size from the smallest frog species such as Brachycephalus pulex, with a minimum adult snout–vent length of 6.45 millimetres (0.254 in), to the blue whale, at up to 33 m (108 ft) and weighing some 150 tonnes. Molecular markers known as conserved signature indels in protein sequences have been identified and provide distinguishing criteria for the vertebrate subphylum. Five molecular markers are exclusively shared by all vertebrates and reliably distinguish them from all other animals; these include protein synthesis elongation factor-2, eukaryotic translation initiation factor 3, adenosine kinase and a protein related to ubiquitin carboxyl-terminal hydrolase. A specific relationship between vertebrates and tunicates is supported by two molecular markers, the proteins Rrp44 (associated with the exosome complex) and serine C-palmitoyltransferase. These are exclusively shared by species from these two subphyla, but not by cephalochordates. Evolutionary history Vertebrates originated during the Cambrian explosion at the start of the Paleozoic, which saw a rise in animal diversity. The earliest known vertebrates belong to the Chengjiang biota and lived about 518 million years ago. These include Haikouichthys, Myllokunmingia, Zhongjianichthys, and probably Yunnanozoon. Unlike other Cambrian animals, these groups had the basic vertebrate body plan: a notochord, rudimentary vertebrae, and a well-defined head and tail, but lacked jaws. As such, one perspective is that Haikouichthys and other Myllokunmingiidae probably represent basal stem group craniates rather than actual vertebrates. The conodonts, a vertebrate group of uncertain phylogeny resembling small eels, are known from microfossils of their paired tooth segments from the late Cambrian to the end of the Triassic. Given the hard teeth of the soft-bodied conodonts, zoologists have debated whether teeth mineralized first and bones later, or vice versa; it seems that the mineralized skeleton came first.
The first jawed vertebrates may have appeared in the late Ordovician (~445 mya) or Silurian, and became common in the Devonian period, often known as the "Age of Fishes". The bony fishes appeared in the Silurian; they became common in the Devonian. By the middle of the Devonian, a lineage of bony fishes, the sarcopterygii, with both gills and air-breathing lungs adapted to life in swampy pools, used their muscular paired fins to propel themselves on land. The fins, already possessing bones and joints, evolved into the two pairs of walking legs of the first tetrapods in the Famennian stage of the Devonian. These tetrapods established themselves on land as amphibians in the next geological period, the Carboniferous. A group of vertebrates, the amniotes, with membranes around the embryo allowing it to survive on dry land, branched from amphibious tetrapods in the Carboniferous. At the onset of the Mesozoic, all larger vertebrate groups were devastated after the largest mass extinction in earth history. The following recovery phase saw the emergence of many new vertebrate groups that are still around today, and this time has been described as the origin of modern ecosystems. On the continents, the ancestors of modern lissamphibians, turtles, crocodilians, lizards, and mammals appeared, as well as dinosaurs, which gave rise to birds later in the Mesozoic. In the seas, various groups of marine reptiles evolved, as did new groups of fish. At the end of the Mesozoic, another extinction event extirpated dinosaurs (other than birds) and many other vertebrate groups. The Cenozoic, the current era, is sometimes called the "Age of Mammals", because of the dominance of the terrestrial environment by that group. Placental mammals have predominantly occupied the Northern Hemisphere, with marsupial mammals in the Southern Hemisphere. Approaches to classification In 1801, Jean-Baptiste Lamarck defined the vertebrates as a taxonomic group, a phylum distinct from the invertebrates he was studying. He described them as consisting of four classes, namely fish, reptiles, birds, and mammals, but treated the cephalochordates and tunicates as molluscs. In 1866, Ernst Haeckel called both his Craniata (vertebrates) and his Acrania (cephalochordates) Vertebrata. In 1877, Ray Lankester grouped the craniates, cephalochordates, and urochordates (tunicates) as Vertebrata. In 1880–1881, Francis Maitland Balfour placed the Vertebrata as a subphylum within the chordates. In 2018, Naoki Irie and colleagues proposed making Vertebrata a full phylum. In 1758, Linnaeus classified hagfishes as Vermes, not vertebrates. In 1806, André Marie Constant Duméril grouped hagfishes and lampreys in the taxon Cyclostomi, characterized by horny teeth borne on a tongue-like apparatus, a large notochord as adults, and pouch-shaped gills (Marsupibranchii). The cyclostomes were seen as either degenerate cartilaginous fishes or primitive vertebrates. In 1889, Edward Drinker Cope coined the name Agnatha ("jawless") for a group that included the cyclostomes and fossil groups in which jaws could not be observed. Vertebrates were subsequently divided into two major sister-groups: the Agnatha and the Gnathostomata (jawed vertebrates). In 1927, Erik Stensiö suggested that the two groups of living agnathans (i.e. the cyclostomes) arose independently from fossil agnathans. 
In 1977, Søren Løvtrup argued that lampreys are more closely related to gnathostomes, based on characters such as radial muscles in the fins, true lymphocytes, neuromasts in the inner ear, and a cerebellum. This implied that Vertebrata and Craniata were distinct taxa. The validity of the taxon "Craniata" was examined in 2002 by Delarbre et al. using mtDNA sequencing; they concluded that Myxini is more closely related to Hyperoartia than to Gnathostomata, i.e. that modern jawless fishes form a clade called Cyclostomata. This implies that Vertebrata should return to its old content (Gnathostomata + Cyclostomata) and that the name Craniata is a junior synonym of Vertebrata. In 2010, the debate concluded when the French paleontologist Philippe Janvier stated that he accepted that both vertebrates and cyclostomes were monophyletic, and that "the intuitions of 19th century zoologists were correct in assuming that [cyclostomes] (notably, hagfishes) are strongly degenerate and have lost many characters over time." Conventional evolutionary taxonomy groups extant vertebrates into seven classes based on traditional interpretations of gross anatomical and physiological traits. The commonly held classification lists three classes of fish and four of tetrapods. This ignores some of the natural relationships between the groupings. For example, the birds derive from a group of reptiles, so "Reptilia" excluding Aves is not a natural grouping; it is described as paraphyletic and shown in quotation marks. In addition to these, there are two classes of extinct armoured fishes, Placodermi and Acanthodii. Other ways of classifying the vertebrates have been devised, particularly with emphasis on the phylogeny of early amphibians and reptiles. An example based on work by M.J. Benton in 2004 is given here († = extinct, "" = paraphyletic). While this traditional taxonomy is orderly, most of the groups are paraphyletic, meaning that the structure does not accurately reflect the natural evolved grouping. For instance, descendants of the first reptiles include modern reptiles, mammals and birds; the agnathans have given rise to the jawed vertebrates; the bony fishes have given rise to the land vertebrates; a group of amphibians, the labyrinthodonts, have given rise to the reptiles (traditionally including the mammal-like synapsids), which in turn have given rise to the mammals and birds. Most scientists working with vertebrates use a classification based purely on phylogeny, organized by their known evolutionary history. The closest relatives of vertebrates have been debated over the years. It was once thought that the Cephalochordata was the sister taxon to Vertebrata. This group, Notochordata, was taken to be sister to the Tunicata. Since 2006, analysis has shown that the tunicates + vertebrates form a clade, the Olfactores, with Cephalochordata as its sister (the Olfactores hypothesis); the corresponding phylogenetic tree has three branches: Leptocardii (lancelets), Tunicata (sea squirts, etc.), and Vertebrata. A further tree gives the internal phylogeny of extant vertebrates, covering the coelacanths, lungfishes, amphibians, mammals, lepidosaurs, turtles, crocodilians, and dinosaurs. The placement of hagfishes within the vertebrates has been controversial. Their lack of proper vertebrae (among other characteristics of jawless lampreys and jawed vertebrates) led authors of phylogenetic analyses based on morphology to place them outside Vertebrata. Molecular data however indicates that they are vertebrates, being most closely related to lampreys.
An older view is that they are a sister group of vertebrates in the common taxon of Craniata. In 2019, Tetsuto Miyashita and colleagues reconciled the two types of analysis, supporting the Cyclostomata hypothesis using only morphological data. A wider issue is the position of fossil agnathans, such as the Myllokunmingiida. Tetsuto Miyashita and colleagues in 2019 place them tentatively as part of the Vertebrata total group, outside the Vertebrata crown group that led to all extant vertebrates. These fossils have a cranium (a skull of bone or cartilage) but at most a rudimentary vertebral column, so they can be viewed as part of a craniate clade that also includes the crown group vertebrates which possess a full vertebral column. In this arrangement, the tree comprises the †Myllokunmingiida, †Metaspriggina, †Anaspida, †Pipiscius, and †Conodonta, alongside the Cyclostomata (lampreys and hagfishes) and the Gnathostomata (jawed vertebrates). Diversity Described and extant vertebrate species are split roughly evenly but non-phylogenetically between non-tetrapod "fish" and tetrapods. The following table lists the number of described extant species for each vertebrate class as estimated in the IUCN Red List of Threatened Species, 2014.3. Paraphyletic groups are shown in quotation marks. The IUCN estimates that 1,305,075 extant invertebrate species have been described, which means that less than 5% of the described animal species in the world are vertebrates. The Living Planet Index, following 16,704 populations of 4,005 species of vertebrates, shows a decline of 60% between 1970 and 2014. Since 1970, freshwater species declined 83%, and tropical populations in South and Central America declined 89%. The authors note that "An average trend in population change is not an average of total numbers of animals lost." According to WWF, this could lead to a sixth major extinction event. The five main causes of biodiversity loss are land-use change, overexploitation of natural resources, climate change, pollution and invasive species.
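The quoted caveat can be made concrete with a toy calculation. The short C program below uses invented numbers (two hypothetical monitored populations, one large and one small) and no real Living Planet Index data: the average of the per-population declines comes out at 50%, while the share of the total number of animals lost is only about 10%, which is why an average trend in population change is not an average of total numbers of animals lost.

```c
/* Toy illustration (invented numbers): an average of per-population
 * percentage declines, the kind of figure an index such as the Living
 * Planet Index reports, is not the same as the share of total animals lost. */
#include <stdio.h>

int main(void)
{
    double start[2]   = { 1000000.0, 100.0 }; /* hypothetical 1970 sizes      */
    double decline[2] = { 0.10, 0.90 };       /* fractional declines since    */

    double total_start = 0.0, total_end = 0.0, mean_decline = 0.0;
    for (int i = 0; i < 2; i++) {
        total_start  += start[i];
        total_end    += start[i] * (1.0 - decline[i]);
        mean_decline += decline[i] / 2.0;      /* average of the two declines */
    }

    printf("average per-population decline: %.0f%%\n", mean_decline * 100.0);
    printf("share of total animals lost:    %.1f%%\n",
           (1.0 - total_end / total_start) * 100.0);
    return 0;
}
```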
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEMcFerran201512-9]
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, a third company's registration of the trademark meant the console could not be launched, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised with the controller's geometric button symbols standing in for letters: "Live in Your World. Play in Ours." and "U R Not E" (with a red E, reading "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal enlarged, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, encouraged by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and came with the necessary documentation and software to program PlayStation games and applications using C compilers.
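The rendering approach described above, with no dedicated 2D hardware and 2D elements expressed as polygons handed to the GPU, and the Net Yaroze's C-based hobbyist development, can be illustrated with a brief sketch. The following C program is purely illustrative: the structure and function names are hypothetical and are not Sony's or the Net Yaroze's actual libraries; it simply mimics, on an ordinary compiler, the idea of queuing a 2D sprite as a flat-shaded quad in a per-frame display list and then submitting that list.

```c
/* Illustrative sketch only: hypothetical names, not Sony's actual libraries.
 * It mirrors the idea described above: the PlayStation has no dedicated 2D
 * processor, so a "sprite" is submitted to the GPU as a flat-shaded polygon
 * packet in a display list rebuilt every frame. */
#include <stdio.h>
#include <stdint.h>

/* A hypothetical GPU packet for one flat-shaded quad. */
typedef struct {
    uint8_t r, g, b;      /* flat shade colour            */
    int16_t x[4], y[4];   /* screen-space corner vertices */
} QuadPacket;

#define MAX_PACKETS 64
static QuadPacket display_list[MAX_PACKETS];
static int packet_count = 0;

/* Express a 2D sprite as a quad and append it to the display list. */
static void queue_sprite_as_quad(int x, int y, int w, int h,
                                 uint8_t r, uint8_t g, uint8_t b)
{
    if (packet_count >= MAX_PACKETS) return;
    QuadPacket *q = &display_list[packet_count++];
    q->r = r; q->g = g; q->b = b;
    q->x[0] = x;     q->y[0] = y;      /* top-left     */
    q->x[1] = x + w; q->y[1] = y;      /* top-right    */
    q->x[2] = x;     q->y[2] = y + h;  /* bottom-left  */
    q->x[3] = x + w; q->y[3] = y + h;  /* bottom-right */
}

/* Stand-in for handing the list to the GPU: here it just prints each packet. */
static void submit_display_list(void)
{
    for (int i = 0; i < packet_count; i++) {
        printf("quad %d: colour(%d,%d,%d) from (%d,%d) to (%d,%d)\n", i,
               display_list[i].r, display_list[i].g, display_list[i].b,
               display_list[i].x[0], display_list[i].y[0],
               display_list[i].x[3], display_list[i].y[3]);
    }
    packet_count = 0; /* the list is rebuilt each frame */
}

int main(void)
{
    /* Two "2D" elements expressed as polygons, in a 320x240 display mode. */
    queue_sprite_as_quad(0, 0, 320, 32, 0, 0, 128);    /* a status bar   */
    queue_sprite_as_quad(100, 120, 16, 16, 255, 0, 0); /* a small sprite */
    submit_display_list();
    return 0;
}
```

On the real console the queued packets would be handed to the GPU for rasterisation rather than printed; the sketch is only meant to show how a frame can be described as a list of polygon packets, consistent with the point above that 2D and 3D elements share the polygon pipeline.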
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controllers are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (so duplicated copies omitted it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well-ventilated area or raise the unit up slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling those of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for each of the five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, whose video game division came to contribute 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, and crediting its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games to consumers at roughly 40% less than cartridge prices while still making the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo themselves or by second-parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to its Nintendo rivals, the Nintendo Entertainment System Classic Edition and the Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ecliptic] | [TOKENS: 2820] |
Contents Ecliptic The ecliptic or ecliptic plane is the orbital plane of Earth around the Sun.[a] It was a central concept in a number of ancient sciences, providing the framework for key measurements in astronomy, astrology and calendar-making. From the perspective of an observer on Earth, the Sun's movement around the celestial sphere over the course of a year traces out a path along the ecliptic against the background of stars – specifically the Zodiac constellations. The planets of the Solar System can be seen along the ecliptic because their orbital planes are very close to Earth's. The Moon also appears near the plane, offset by lunar nodes; the ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it. The ecliptic is an important reference plane and is the basis of the ecliptic coordinate system. Ancient scientists were able to calculate Earth's axial tilt by comparing the angle of the ecliptic (about 23.4°) to that of the equatorial plane. Sun's apparent motion The ecliptic is the apparent path of the Sun throughout the course of a year. Because Earth takes one year to orbit the Sun, the apparent position of the Sun takes one year to make a complete circuit of the ecliptic. With slightly more than 365 days in one year, the Sun moves a little less than 1° eastward every day. This small difference in the Sun's position against the stars causes any particular spot on Earth's surface to catch up with (and stand directly north or south of) the Sun about four minutes later each day than it would if Earth did not orbit; a day on Earth is therefore 24 hours long rather than the approximately 23-hour 56-minute sidereal day. Again, this is a simplification, based on a hypothetical Earth that orbits at a uniform angular speed around the Sun. The actual speed with which Earth orbits the Sun varies slightly during the year, so the speed with which the Sun seems to move along the ecliptic also varies. For example, the Sun is north of the celestial equator for about 185 days of each year, and south of it for about 180 days. The variation of orbital speed accounts for part of the equation of time. Because of the movement of Earth around the Earth–Moon center of mass, the apparent path of the Sun wobbles slightly, with a period of about one month. Because of further perturbations by the other planets of the Solar System, the Earth–Moon barycenter wobbles slightly around a mean position in a complex fashion. Relationship to the celestial equator Because Earth's rotational axis is not perpendicular to its orbital plane, Earth's equatorial plane is not coplanar with the ecliptic plane, but is inclined to it by an angle of about 23.4°, which is known as the obliquity of the ecliptic. If the equator is projected outward to the celestial sphere, forming the celestial equator, it crosses the ecliptic at two points known as the equinoxes. The Sun, in its apparent motion along the ecliptic, crosses the celestial equator at these points, one from south to north, the other from north to south. The crossing from south to north is known as the March equinox, also known as the first point of Aries and the ascending node of the ecliptic on the celestial equator. The crossing from north to south is the September equinox or descending node. 
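To make the figures above concrete, here is a minimal Python sketch of the arithmetic behind the Sun's roughly 1° per day eastward drift and the roughly four-minute difference between the solar and sidereal day. The mean tropical year (365.2422 days) and mean sidereal day (about 86,164 seconds) are standard assumed values, not figures taken from this article's references.

# Rough illustration of why the Sun appears to drift a little less than 1 degree
# eastward per day, and why the solar day runs about four minutes longer than
# the sidereal day.  Constants are assumed standard values.
TROPICAL_YEAR_DAYS = 365.2422        # assumed mean tropical year, in solar days
SIDEREAL_DAY_SECONDS = 86164.0905    # assumed mean sidereal day, in SI seconds

# Mean eastward drift of the Sun along the ecliptic per solar day.
daily_drift_deg = 360.0 / TROPICAL_YEAR_DAYS

# Over one year Earth makes one extra rotation relative to the Sun, so the
# solar day exceeds the sidereal day by roughly 1/365.2422 of a day.
solar_day_seconds = SIDEREAL_DAY_SECONDS * (1 + 1 / TROPICAL_YEAR_DAYS)
extra_minutes = (solar_day_seconds - SIDEREAL_DAY_SECONDS) / 60.0

print(f"mean daily drift of the Sun: {daily_drift_deg:.4f} deg/day")   # ~0.9856
print(f"solar day minus sidereal day: {extra_minutes:.2f} minutes")    # ~3.9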
The orientation of Earth's axis and equator are not fixed in space, but rotate about the poles of the ecliptic with a period of about 26,000 years, a process known as lunisolar precession, as it is due mostly to the gravitational effect of the Moon and Sun on Earth's equatorial bulge. Likewise, the ecliptic itself is not fixed. The gravitational perturbations of the other bodies of the Solar System cause a much smaller motion of the plane of Earth's orbit, and hence of the ecliptic, known as planetary precession. The combined action of these two motions is called general precession, and changes the position of the equinoxes by about 50 arc seconds (about 0.014°) per year. Once again, this is a simplification. Periodic motions of the Moon and apparent periodic motions of the Sun (actually of Earth in its orbit) cause short-term small-amplitude periodic oscillations of Earth's axis, and hence the celestial equator, known as nutation. This adds a periodic component to the position of the equinoxes; the positions of the celestial equator and (March) equinox with fully updated precession and nutation are called the true equator and equinox; the positions without nutation are the mean equator and equinox. Obliquity of the ecliptic Obliquity of the ecliptic is the term used by astronomers for the inclination of Earth's equator with respect to the ecliptic, or of Earth's rotation axis to a perpendicular to the ecliptic. It is about 23.4° and is currently decreasing 0.013 degrees (47 arcseconds) per hundred years because of planetary perturbations. The angular value of the obliquity is found by observation of the motions of Earth and other planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived. Until 1983 the obliquity for any date was calculated from work of Newcomb, who analyzed positions of the planets until about 1895: ε = 23°27′08.26″ − 46.845″ T − 0.0059″ T² + 0.00181″ T³ where ε is the obliquity and T is tropical centuries from B1900.0 to the date in question. From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated: ε = 23°26′21.45″ − 46.815″ T − 0.0006″ T² + 0.00181″ T³ where hereafter T is Julian centuries from J2000.0. JPL's fundamental ephemerides have been continually updated. The Astronomical Almanac for 2010 specifies: ε = 23°26′21.406″ − 46.836769″ T − 0.0001831″ T² + 0.00200340″ T³ − 0.576×10⁻⁶″ T⁴ − 4.34×10⁻⁸″ T⁵ These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. J. Laskar computed an expression to order T¹⁰ good to 0.04″/1000 years over 10,000 years. All of these expressions are for the mean obliquity, that is, without the nutation of the equator included. The true or instantaneous obliquity includes the nutation. Plane of the Solar System Most of the major bodies of the Solar System orbit the Sun in nearly the same plane. This is likely due to the way in which the Solar System formed from a protoplanetary disk. Probably the closest current representation of the disk is known as the invariable plane of the Solar System.
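As a worked example of the obliquity expressions quoted above, the following Python sketch evaluates the Astronomical Almanac 2010 polynomial for a given date. The conversion from a calendar date to Julian centuries T is a simplification (it treats UTC as Terrestrial Time), and the helper names are mine rather than anything standard.

# Mean obliquity of the ecliptic from the Astronomical Almanac (2010) polynomial
# quoted above; T is in Julian centuries of 36,525 days from J2000.0.
from datetime import datetime, timezone

def julian_centuries_since_j2000(dt):
    """Approximate Julian centuries since J2000.0 (UTC ~ TT is a simplification)."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    return (dt - j2000).total_seconds() / (36525.0 * 86400.0)

def mean_obliquity_arcsec(T):
    """Evaluate the 2010 Astronomical Almanac expression, in arcseconds."""
    base = 23 * 3600 + 26 * 60 + 21.406          # 23°26′21.406″ in arcseconds
    return (base
            - 46.836769 * T
            - 0.0001831 * T**2
            + 0.00200340 * T**3
            - 0.576e-6 * T**4
            - 4.34e-8 * T**5)

T = julian_centuries_since_j2000(datetime(2025, 1, 1, tzinfo=timezone.utc))
eps_deg = mean_obliquity_arcsec(T) / 3600.0
print(f"mean obliquity on 2025-01-01: {eps_deg:.5f} degrees")   # close to 23.436

The result is the mean obliquity; as the text notes, adding nutation would give the true or instantaneous obliquity.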
Earth's orbit, and hence, the ecliptic, is inclined a little more than 1° to the invariable plane, Jupiter's orbit is within a little more than ½° of it, and the other major planets are all within about 6°. Because of this, most Solar System bodies appear very close to the ecliptic in the sky. The invariable plane is defined by the angular momentum of the entire Solar System, essentially the vector sum of all of the orbital and rotational angular momenta of all the bodies of the system; more than 60% of the total comes from the orbit of Jupiter. That sum requires precise knowledge of every object in the system, making it a somewhat uncertain value. Because of the uncertainty regarding the exact location of the invariable plane, and because the ecliptic is well defined by the apparent motion of the Sun, the ecliptic is used as the reference plane of the Solar System both for precision and convenience. The only drawback of using the ecliptic instead of the invariable plane is that over geologic time scales, it will move against fixed reference points in the sky's distant background. Celestial reference plane The ecliptic forms one of the two fundamental planes used as reference for positions on the celestial sphere, the other being the celestial equator. Perpendicular to the ecliptic are the ecliptic poles, the north ecliptic pole being the pole north of the equator. Of the two fundamental planes, the ecliptic is closer to unmoving against the background stars, its motion due to planetary precession being roughly 1/100 that of the celestial equator. Spherical coordinates, known as ecliptic longitude and latitude or celestial longitude and latitude, are used to specify positions of bodies on the celestial sphere with respect to the ecliptic. Longitude is measured positively eastward 0° to 360° along the ecliptic from the March equinox, the same direction in which the Sun appears to move. Latitude is measured perpendicular to the ecliptic, to +90° northward or −90° southward to the poles of the ecliptic, the ecliptic itself being 0° latitude. For a complete spherical position, a distance parameter is also necessary. Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near Earth, Earth radii or kilometers are used. A corresponding right-handed rectangular coordinate system is also used occasionally; the x-axis is directed toward the March equinox, the y-axis 90° to the east, and the z-axis toward the north ecliptic pole; the astronomical unit is the unit of measure. Symbols for ecliptic coordinates are somewhat standardized; see the table. Ecliptic coordinates are convenient for specifying positions of Solar System objects, as most of the planets' orbits have small inclinations to the ecliptic, and therefore always appear relatively close to it on the sky. Because Earth's orbit, and hence the ecliptic, moves very little, it is a relatively fixed reference with respect to the stars. Because of the precessional motion of the equinox, the ecliptic coordinates of objects on the celestial sphere are continuously changing. Specifying a position in ecliptic coordinates requires specifying a particular equinox, that is, the equinox of a particular date, known as an epoch; the coordinates are referred to the direction of the equinox at that date. 
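The rectangular system described above can be written down directly. This short Python sketch converts ecliptic longitude, latitude and distance into the x, y, z axes defined in the text (x toward the March equinox, y 90° to the east, z toward the north ecliptic pole). The sample values are the heliocentric position of Mars quoted in the next paragraph; the function name is illustrative only.

import math

def ecliptic_to_rectangular(lon_deg, lat_deg, r_au):
    """Ecliptic longitude/latitude/distance -> rectangular (x, y, z) in AU."""
    lam = math.radians(lon_deg)
    bet = math.radians(lat_deg)
    x = r_au * math.cos(bet) * math.cos(lam)   # toward the March equinox
    y = r_au * math.cos(bet) * math.sin(lam)   # 90 degrees to the east along the ecliptic
    z = r_au * math.sin(bet)                   # toward the north ecliptic pole
    return x, y, z

# Heliocentric Mars, 0h TT on 4 January 2010 (figures quoted in the next paragraph):
# longitude 118°09′15.8″, latitude +1°43′16.7″, distance 1.6302454 AU.
lon = 118 + 9 / 60 + 15.8 / 3600
lat = 1 + 43 / 60 + 16.7 / 3600
x, y, z = ecliptic_to_rectangular(lon, lat, 1.6302454)
print(f"x = {x:+.4f} AU, y = {y:+.4f} AU, z = {z:+.4f} AU")   # roughly (-0.77, +1.44, +0.05)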
For instance, the Astronomical Almanac lists the heliocentric position of Mars at 0h Terrestrial Time, 4 January 2010 as: longitude 118°09′15.8″, latitude +1°43′16.7″, true heliocentric distance 1.6302454 AU, mean equinox and ecliptic of date. This specifies the mean equinox of 4 January 2010 0h TT as above, without the addition of nutation. Eclipses Because the orbit of the Moon is inclined only about 5.145° to the ecliptic and the Sun is always very near the ecliptic, eclipses always occur on or near it. Because of the inclination of the Moon's orbit, eclipses do not occur at every conjunction and opposition of the Sun and Moon, but only when the Moon is near an ascending or descending node at the same time it is at conjunction (new) or opposition (full). The ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it. Equinoxes and solstices The exact instants of equinoxes and solstices are the times when the apparent ecliptic longitude (including the effects of aberration and nutation) of the Sun is 0°, 90°, 180°, and 270°. Because of perturbations of Earth's orbit and anomalies of the calendar, the dates of these are not fixed. In the constellations The ecliptic currently passes through the following thirteen constellations: There are twelve constellations that are not on the ecliptic, but are close enough that the Moon and planets can occasionally appear in them. Astrology The ecliptic forms the center of the zodiac, a celestial belt about 20° wide in latitude through which the Sun, Moon, and planets always appear to move. Traditionally, this region is divided into 12 signs of 30° longitude, each of which approximates the Sun's motion in one month. In ancient times, the signs corresponded roughly to 12 of the constellations that straddle the ecliptic. These signs are sometimes still used in modern terminology. The "First Point of Aries" was named when the March equinox Sun was actually in the constellation Aries; it has since moved into Pisces because of precession of the equinoxes. See also Notes and references External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/2025_Tesla_vandalism] | [TOKENS: 2271] |
Contents 2025 Tesla vandalism Beginning in early 2025, there has been an increased number of incidents of vandalism targeting Tesla property, including vehicles, dealerships, and charging stations. The incidents have been part of a larger wave of protests against Elon Musk, Tesla's owner and a key figure of the Department of Government Efficiency (DOGE). The majority of incidents have occurred in the United States, but they have also been reported in Canada, France, Germany, the United Kingdom, and New Zealand. The Federal Bureau of Investigation (FBI) and the Attorney General Pam Bondi have labeled the vandalism as domestic terrorism and President Donald Trump suggested the perpetrators be sent to prison in El Salvador. Although Musk has suggested that the vandalism is funded and coordinated, the FBI and Bureau of Alcohol, Tobacco, Firearms, and Explosives said they could not find any evidence that attacks have been coordinated. Incidents A woman was arrested for spray-painting a Tesla dealership in Buffalo Grove, Illinois. A Tesla owner in Wichita, Kansas, reported a man keying her vehicle outside a restaurant. Several Cybertrucks were set on fire at a dealership in Kansas City, Missouri. In Bloomington, Minnesota, a woman was filmed keying a Tesla in a parking lot. Authorities decided not to move forward with criminal charges after the woman turned herself in and agreed to pay for damages. A Tesla was also keyed in West Fargo, North Dakota, and a suspect was arrested. In west Michigan, a Cybertruck owner reported someone spray-painted "FUCK OFF NAZI" on the side of his vehicle in early March. In Kentwood, five Cybertrucks were vandalized on March 10, with one having the words "Nazis always lose" spray-painted on the side. Several Tesla chargers were set on fire outside a shopping center in Littleton, Massachusetts, on March 3. In a Dedham dealership, two Cybertrucks were spray-painted, and a Model S had all of its tires punctured. The same dealership had been targeted by vandalism on February 26. In Brookline, a man was arrested for allegedly placing Elon Musk-related stickers on Teslas. A Tesla owner in Syracuse, New York, reported an unidentified person wrote "This car supports Nazis" on his car while he was in a theater. On March 6, two men in New York City spray-painted swastikas on a Cybertruck in Lower Manhattan. The New York City Police Department said they were investigating the vandalism as a hate crime. In Pennsylvania on March 23, a Tesla Cybertruck was vandalized while parked outside a restaurant in Newtown Township, Bucks County by an apparent juvenile who dragged something along the side of it, leaving a mark. A Tesla dealership in Owings Mills, Maryland, was spray-painted on March 2, the day after a protest was held outside the same dealership. In Washington, D.C., police said they were looking for a man and woman who had graffitied at least two Teslas with unspecified "political hate speech". The word "NAZI" was found etched on the side of a Tesla in Garner, North Carolina. Police in Tulsa, Oklahoma reported that a masked individual spray-painted the word "NAZI" on the side of a Cybertruck. In North Charleston, South Carolina, federal authorities charged a man with arson after he allegedly set Tesla chargers on fire with Molotov cocktails. The suspect is also accused of spray-painting "Fuck Trump, long live Ukraine" next to the chargers. 
On March 24, the Austin Police Department responded to reports of suspicious devices found at a local Tesla showroom; the Austin bomb squad determined them to be incendiary devices. On March 25, a man in Texarkana, Texas, was arrested and charged with a felony for vandalizing several Teslas. Video from one of the cars appeared to show the suspect driving a mini four-wheeler into the side of a Tesla parked outside a restaurant. On March 29, a woman in Aventura, Florida, was arrested and charged with felony criminal mischief for sticking a wad of gum to the door handle of a Tesla, an event which the vehicle's owner characterized as the result of "unfortunately, a divide in our country where certain views that are not accepted by a subset". On April 6, a man slashed the tire of a Tesla parked outside a grocery store in Clovis. In San Jose, police arrested a man who was seen on video keying a Tesla. A dealership and several vehicles in Encinitas were spray-painted with swastikas. In Vista, a person reportedly broke the side-view mirror of a Tesla parked in a driveway. In Berkeley, a person was filmed spray-painting a Tesla in a Whole Foods Market parking lot. On March 29, a Cybertruck parked outside a home in Novato was vandalized, with a suspect slitting the tires and throwing a rock at the windshield. One person was arrested after a molotov cocktail was thrown at a Tesla dealership in Loveland, Colorado, on March 7. A second person had been charged with vandalizing the same dealership earlier in the month, though police said the incidents appeared to be unrelated. Police in Colorado Springs said they had responded to two reports of Tesla vehicles being vandalized in 2025. Sometime during the night of April 2–3, 16 Cybertrucks and a dealership were spray-painted in Meridian, Idaho. In Nevada, multiple Teslas were set on fire at a dealership in Enterprise. The Federal Bureau of Investigation said it was investigating. A 36-year-old Asian-American Las Vegas resident was arrested and charged with federal offenses. In Salem, Oregon, a Tesla vehicle was set on fire in a dealership parking lot in January. The following month, several gunshots were fired at the windows of the same dealership. A suspect was arrested and faces federal charges. A dealership in Tigard was also damaged by gunfire in March. A security guard was present but was not injured. In Portland, a person spray-painted the word "Nazi" on a Tesla. In Eugene, the words "Divest" and "Depose" were spray-painted on two Tesla vehicles. In Portland, Oregon, a man was arrested for trying to blind Tesla employees with a laser pointer. In Washington, four Cybertrucks were damaged by a fire in Seattle in March. An explosion occurred at a Tesla supercharger station located in Lacey in April. On March 22, four Tesla chargers in Rock Springs, Wyoming, were spray-painted with swastikas. In Vancouver, police arrested a man suspected of vandalizing a Tesla dealership multiple times between January 1 and March 21, spray-painting obscenities on the building. In Montreal, police arrested two members of an activist group who sprayed paint on a dealership. On March 17, a Tesla Model S parked in a test drive spot reserved for a showroom inside Masonville Place in London, Ontario, was set on fire; no injuries were reported. On March 18, a Tesla car was burned in southeast Calgary, and on March 19, a Tesla Cybertruck was burned in the city.
On March 20, 80 Teslas were damaged at a dealership in Hamilton, Ontario, with cars having deep scratches and tires punctured. February 13 and March 31 Tesla vehicles were vandalized in Victoria, Canada. In Vancouver, there have been 28 acts of vandalism against Teslas, charging stations and car dealerships since January (as of April 1). On April 3 a pregnant woman was injured after a rock was thrown at her Tesla. Around a dozen Teslas were set on fire outside a dealership in Toulouse on February 23. Eight cars were destroyed and another four were damaged. Several Tesla Superchargers were set on fire in Saint-Chamond. The police said two chargers were completely destroyed, while the others were damaged. A dozen burnings of Teslas have happened from September, 2024 to April, 2025 in Deux-Sèvres. Seven Teslas were destroyed after catching on fire outside a dealership in Ottersberg, Germany, around 3:30 a.m. on March 29, a worldwide day of action announced by the Tesla Takedown movement. Seventeen Tesla vehicles were damaged in a fire at a store in Rome. In New Zealand, several Tesla cars were spray painted in Auckland. Members of Just Stop Oil poured an orange liquid latex over a Tesla robot at a store in London. Response In a Truth Social post, President Trump suggested people who vandalized Teslas should be sent to prisons in El Salvador, the same country where the United States had recently sent hundreds of migrants. Trump also claimed, without evidence, that "people that are very highly political on the left" were paying the vandals. Attorney General Pam Bondi described Tesla vandalism as "nothing short of domestic terrorism" and vowed to "impose severe consequences on those involved in these attacks, including those operating behind the scenes to coordinate and fund these crimes". Elon Musk said those vandalizing cars should "stop being psycho" and called Tesla a "peaceful company". He said he believed there was a "mental illness thing going on", and suggested that the perpetrators were being led by a "larger force" who were funding and coordinating the violence. He also referred to the acts as "trans violence" and baselessly claimed that "the probability of a trans person being violent appears to be vastly higher". Law enforcement and domestic terrorism experts have found no evidence that the attacks are coordinated. On March 24, the Federal Bureau of Investigation (FBI) launched an investigation task force. The director of the FBI, Kash Patel, has called the attacks "domestic terrorism". Bruce Hoffman, senior fellow for counterterrorism and homeland security at the Council on Foreign Relations, agreed with the classification of terrorism by saying "It's absolutely domestic terrorism. Vandalism is a crime that if it's committed with a political motive, can certainly be defined as terrorism." After increasing losses due to an increase in vandalism, insurance companies have suggested their rates could rise to insure Tesla vehicles. Compared to the average rise in cost of 10% to insure US vehicles, the Model Y has risen 29% and the Model 3 has risen 24% from 2024 to 2025. The average cost to insure Model Y and 3 increased $300 and $101, respectively, from January to March 2025. In March 2025, the Vancouver International Auto Show removed Tesla from its lineup due to safety concerns. 
The April 5, 2025, episode of Saturday Night Live referenced the increase in vandalism in its cold open, with Musk, played by Mike Myers, blaming his own unpopularity for the situation and introducing a "fully self-vandalizing" Tesla vehicle that includes "AI-powered graffiti" of swastikas and penises. Musk responded to the sketch in a post on X, giving his view that the show "hasn't been funny in a long time". References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-Shapira-10] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and up until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the later phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall with no stone foundations, together with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish centre between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, Emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), Joshua ben Levi is said to have founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod which was referred to as "al-Ludd" in Arabic served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla, as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth century in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596 Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax-rate of 33,3 % on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" =spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH/1641/2, the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. 
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews), the Christians were 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish state and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered.In a separate incident, three Jewish youths, two men and a woman were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda’s principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. 
A key event was the Palestinian expulsion from Lydda and Ramle, with the expulsion of 50,000-70,000 Palestinians from Lydda and Ramle by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods and construction in Jewish areas was given priority over construction in Arab neighborhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6% as "Arab". Education According to CBS, 38 schools and 13,188 pupils are in the city. They are spread out as 26 elementary schools and 8,325 elementary school pupils, and 13 high schools and 4,863 high school pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co" - a subsidiary of the Strauss Group and "Kashev" - the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. 
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009-2010, Dor Guez held an exhibit, Georgeopolis, at the Petach Tikva art museum that focuses on Lod. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy) was established soon after, but folded in 2007. Notable people Twin towns-sister cities Lod is twinned with: See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_art] | [TOKENS: 429] |
Contents Sociology of art The sociology of art is a subfield of sociology that explores the societal dimensions of art and aesthetics. Scholars who have written on the sociology of art include Pierre Bourdieu, Vera Zolberg, Howard S. Becker, Arnold Hauser, and Harrison White. Approaches In her 1970 book Meaning and Expression: Toward a Sociology of Art, Hanna Deinhard gives one approach: "The point of departure of the sociology of art is the question: How is it possible that works of art, which always originate as products of human activity within a particular time and society and for a particular time, society, or function -- even though they are not necessarily produced as 'works of art' -- can live beyond their time and seem expressive and meaningful in completely different epochs and societies? On the other hand, how can the age and society that produced them be recognized in the works?" Other approaches consider the social and economic background to the creation of works of art, which has been a major focus of art history in recent decades. For example, research has examined the role of gender and nationality of artists in museum exhibition and textbook inclusion. The roles of patrons and consumers of art, as well as those of the artists themselves, are also considered. Research into the geographic location of art collections and collectors has shown that location affects the prestige and recognition of collectors in the art world. There has also been great interest in the history of art collecting, and in the history of objects between their creation and their current location, beyond mere provenance. Recent work has also employed new analysis techniques such as social network analysis to understand how an artist's reputation can be affected by association with other artists in exhibitions. See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Equator] | [TOKENS: 2681] |
Contents Equator The equator is the circle of latitude that divides Earth into the Northern and Southern Hemispheres. It is an imaginary line located at 0 degrees latitude, about 21,639 nautical miles (40,075 kilometres; 24,902 miles) in circumference, halfway between the North and South Poles. The term can also be used for any other celestial body that is roughly spherical. In spatial (3D) geometry, as applied in astronomy, the equator of a rotating spheroid (such as a planet) is the parallel (circle of latitude) at which latitude is defined to be 0°. It is an imaginary line on the spheroid, equidistant from its poles, dividing it into northern and southern hemispheres. In other words, it is the intersection of the spheroid with the plane perpendicular to its axis of rotation and midway between its geographical poles. On and near the equator (on Earth), noontime sunlight appears almost directly overhead (no more than about 23° from the zenith) every day, year-round. Consequently, the equator has a rather stable daytime temperature throughout the year. On the equinoxes (approximately 20 March and 23 September) the subsolar point crosses Earth's equator at a shallow angle, sunlight shines perpendicular to Earth's axis of rotation, and all latitudes have nearly a 12-hour day and 12-hour night. Etymology The name is derived from medieval Latin word aequator, in the phrase circulus aequator diei et noctis, meaning 'circle equalizing day and night', from the Latin word aequare 'make equal'. Overview The latitude of the Earth's equator is, by definition, 0° (zero degrees) of arc. The equator is one of the five notable circles of latitude on Earth; the other four are the two polar circles (the Arctic Circle and the Antarctic Circle) and the two tropical circles (the Tropic of Cancer and the Tropic of Capricorn). The equator is the only line of latitude which is also a great circle—meaning, one whose plane passes through the center of the globe. The plane of Earth's equator, when projected outwards to the celestial sphere, defines the celestial equator. In the cycle of Earth's seasons, the equatorial plane runs through the Sun twice a year: on the equinoxes in March and September. To a person on Earth, the Sun appears to travel along the equator (or along the celestial equator) at these times. Locations on the equator experience the shortest sunrises and sunsets because the Sun's daily path is nearly perpendicular to the horizon for most of the year. The length of daylight (sunrise to sunset) is almost constant throughout the year; it is about 14 minutes longer than nighttime due to atmospheric refraction and the fact that sunrise begins (or sunset ends) as the upper limb, not the center, of the Sun's disk contacts the horizon. Earth bulges slightly at the equator; its average diameter is 12,742 km (7,918 mi), but the diameter at the equator is about 43 km (27 mi) greater than at the poles. Sites near the equator, such as the Guiana Space Centre in Kourou, French Guiana, are good locations for spaceports as they have the fastest rotational speed of any latitude, 460 metres (1,510 ft)/sec. The added velocity reduces the fuel needed to launch spacecraft eastward (in the direction of Earth's rotation) to orbit, while simultaneously avoiding costly maneuvers to flatten inclination during missions such as the Apollo Moon landings. 
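The circumference, rotation-speed, and arc-minute figures quoted in this article can be checked with a few lines of arithmetic. The sketch below treats the equator as a perfect circle, uses the WGS 84 equatorial radius quoted in the Geodesy section that follows, and supplies a standard sidereal-day length (86,164.1 s) that is not given in the article.

```python
# Rough numerical check of figures quoted in this article, treating the equator
# as a perfect circle of WGS 84 equatorial radius. The sidereal-day length is a
# standard value supplied here, not taken from the article.
import math

WGS84_EQUATORIAL_RADIUS_KM = 6_378.137   # quoted in the Geodesy section below
SIDEREAL_DAY_S = 86_164.1                # one rotation of Earth relative to the stars

circumference_km = 2 * math.pi * WGS84_EQUATORIAL_RADIUS_KM
geographical_mile_m = circumference_km * 1000 / (360 * 60)   # one arc-minute of the equator
rotation_speed_m_s = circumference_km * 1000 / SIDEREAL_DAY_S

print(f"Equator length:    {circumference_km:.4f} km")    # ~40,075.0167 km
print(f"Geographical mile: {geographical_mile_m:.4f} m")  # ~1,855.3248 m
print(f"Rotation speed:    {rotation_speed_m_s:.0f} m/s") # ~465 m/s; the article cites about 460 m/s
```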
Geodesy The precise location of the equator is not truly fixed; the true equatorial plane is perpendicular to the Earth's rotation axis, which drifts about 9 metres (30 ft) during a year. Geological samples show that the equator significantly changed positions between 48 and 12 million years ago, as sediment deposited by ocean thermal currents at the equator shifted. The position of these thermal-current deposits is determined by Earth's axis, which determines the solar coverage of Earth's surface. Changes in Earth's axis can also be observed in the geographical layout of volcanic island chains, which are created by shifting hot spots under Earth's crust as the axis and crust move. This is consistent with the Indian tectonic plate colliding with the Eurasian tectonic plate, which is causing the Himalayan uplift. The International Association of Geodesy (IAG) and the International Astronomical Union (IAU) use an equatorial radius of 6,378.1366 kilometres (3,963.1903 mi), codified as the IAU 2009 value. This equatorial radius also appears in the 2003 and 2010 IERS Conventions and is the equatorial radius used for the IERS 2003 ellipsoid. If the equator were really circular, its length would then be exactly 2π times the radius, namely 40,075.0142 kilometres (24,901.4594 mi). The GRS 80 (Geodetic Reference System 1980), as approved and adopted by the IUGG at its Canberra, Australia meeting of 1979, has an equatorial radius of 6,378.137 kilometres (3,963.191 mi). The WGS 84 (World Geodetic System 1984), a standard for use in cartography, geodesy, and satellite navigation including GPS, also has an equatorial radius of 6,378.137 kilometres (3,963.191 mi). For both GRS 80 and WGS 84, this results in a length for the equator of 40,075.0167 kilometres (24,901.4609 mi). The geographical mile is defined as one arc-minute of the equator, so it has different values depending on which radius is assumed. For example, by WGS 84 the distance is 1,855.3248 metres (6,087.024 ft), while by IAU 2000 it is 1,855.3257 metres (6,087.027 ft). This is a difference of less than one millimetre (0.039 in) over a total distance of approximately 1.86 kilometres (1.16 miles). Earth is commonly modeled as a sphere flattened 0.336% along its axis. This makes the equator 0.16% longer than a meridian (a great circle passing through the two poles). The IUGG standard meridian is, to the nearest millimetre, 40,007.862917 kilometres (24,859.733480 mi), one arc-minute of which is 1,852.216 metres (6,076.82 ft), explaining the SI standardization of the nautical mile as 1,852 metres (6,076 ft), more than 3 metres (9.8 ft) less than the geographical mile. The sea-level surface of Earth (the geoid) is irregular, so the actual length of the equator is not so easy to determine. Aviation Week and Space Technology reported on 9 October 1961 that measurements using the Transit IV-A satellite had shown the equatorial diameter from longitude 11° West to 169° East to be 300 metres (1,000 ft) greater than its diameter ninety degrees away. Equatorial countries and territories The equator passes over approximately 8,714 km of land (21.7%) and 31,361 km of sea (78.3%). It passes through the land of eleven sovereign states. Indonesia is the country straddling the greatest length of the equatorial line across both land and sea. 
Starting at the Prime Meridian and heading eastwards, the equator passes through: The equator also passes through the territorial seas of three countries: Maldives (south of Gaafu Dhaalu Atoll), Kiribati (south of Buariki Island), and the United States (south of Baker Island). Despite its name, no part of Equatorial Guinea lies on the equator: its island of Annobón is 155 km (96 mi) south of the equator, and the rest of the country lies to the north. France (Mayotte, Réunion), Norway (Bouvet Island), and the United Kingdom (British Antarctic Territory, British Indian Ocean Territory, Falkland Islands, Pitcairn Islands, Saint Helena, Ascension and Tristan da Cunha, South Georgia and the South Sandwich Islands) are the other three Northern Hemisphere-based countries with territories in the Southern Hemisphere. Equatorial seasons and climate Seasons result from the tilt of Earth's axis away from a line perpendicular to the plane of its revolution around the Sun. Throughout the year, the Northern and Southern hemispheres are alternately turned toward or away from the Sun, depending on Earth's position in its orbit. The hemisphere turned toward the Sun receives more sunlight and is in summer, while the other hemisphere receives less sunlight and is in winter (see solstice). At the equinoxes, Earth's axis is perpendicular to the direction of the Sun rather than tilted toward or away from it, so day and night are both about 12 hours long across the whole of Earth. Near the equator, the annual variation in the strength of solar radiation therefore follows a different pattern from that at higher latitudes: the maximum is received at the equinoxes, when a place on the equator lies directly under the subsolar point at high noon (the time of the intermediate seasons of spring and autumn at higher latitudes), while the minimum occurs at both solstices, when one pole is tilted towards the Sun and the other away from it, producing summer in one hemisphere and winter in the other. At the solstices the subsolar point moves away from the equator and sits over or near the relevant tropic circle. Nevertheless, temperatures at the equator are high year-round, because Earth's axial tilt of 23.5° is not enough to bring the midday Sun low enough in the sky to appreciably weaken its rays, even at the solstices. High year-round temperatures extend to about 25° north and south of the equator, although near the poleward limits of this range a moderate seasonal temperature difference, tied to the opposing solstices as at higher latitudes, does appear. Near the equator, there is little temperature change throughout the year, though there may be dramatic differences in rainfall and humidity. The terms summer, autumn, winter and spring do not generally apply. Lowlands around the equator generally have a tropical rainforest climate, also known as an equatorial climate, though cold ocean currents give some regions tropical monsoon climates with a dry season in the middle of the year, and the Somali Current, generated by the Asian monsoon through continental heating over the high Tibetan Plateau, gives Greater Somalia an arid climate despite its equatorial location. Average annual temperatures in equatorial lowlands are around 31 °C (88 °F) during the afternoon and 23 °C (73 °F) around sunrise. Rainfall is very high away from cold ocean current upwelling zones, from 2,500 to 3,500 mm (100 to 140 in) per year. 
There are about 200 rainy days per year and average annual sunshine hours are around 2,000. Despite high year-round sea level temperatures, some higher altitudes such as the Andes and Mount Kilimanjaro have glaciers. The highest point on the equator is at an elevation of 4,690 metres (15,387 ft), at 0°0′0″N 77°59′31″W, on the southern slopes of Volcán Cayambe [summit 5,790 metres (18,996 ft)] in Ecuador. This is slightly above the snow line and is the only place on the equator where snow lies on the ground. At the equator, the snow line is around 1,000 metres (3,300 ft) lower than on Mount Everest and as much as 2,000 metres (6,600 ft) lower than the highest snow line in the world, near the Tropic of Capricorn on Llullaillaco. Line-crossing ceremonies There is a widespread maritime tradition of holding ceremonies to mark a sailor's first crossing of the equator. In the past, these ceremonies have been notorious for their brutality, especially in naval practice.[citation needed] Milder line-crossing ceremonies, typically featuring King Neptune, are also held for passengers' entertainment on some civilian ocean liners and cruise ships.[citation needed] |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/] | [TOKENS: 524] |
OpenClaw creator Peter Steinberger joins OpenAI Peter Steinberger, who created the AI personal assistant now known as OpenClaw, has joined OpenAI. Previously known as Clawdbot, then Moltbot, OpenClaw achieved viral popularity over the past few weeks with its promise to be the “AI that actually does things,” whether that’s managing your calendar, booking flights, or even joining a social network full of other AI assistants. (The name changed the first time after Anthropic threatened legal action over its similarity to Claude, then changed again because Steinberger liked the new name better.) In a blog post announcing his decision to join OpenAI, the Austrian developer said that while he might have been able to turn OpenClaw into a huge company, “it’s not really exciting for me.” “What I want is to change the world, not build a large company[,] and teaming up with OpenAI is the fastest way to bring this to everyone,” Steinberger said. OpenAI CEO Sam Altman posted on X that in his new role, Steinberger will “drive the next generation of personal agents.” As for OpenClaw, Altman said it will “live in a foundation as an open source project that OpenAI will continue to support.” |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEBeard2014185-11] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
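Before turning to the psychological research surveyed next, the GTVH labelling idea above can be made concrete with a small sketch. This is an illustrative representation only, not Attardo and Raskin's own notation: the field names follow the six KRs, the example values are invented, and the shared-KR count is just one crude way to compare labels.

```python
# Illustrative sketch only (not Attardo and Raskin's notation): a GTVH label as a
# record of the six Knowledge Resources, with a crude count of shared KRs as a
# similarity measure between two jokes.
from dataclasses import dataclass, astuple
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    script_opposition: str            # SO, e.g. "smart/dumb"
    logical_mechanism: Optional[str]  # LM; may be empty, per the caveat quoted above
    situation: str                    # SI, e.g. "changing a light bulb"
    target: Optional[str]             # TA; may be empty
    narrative_strategy: str           # NS, e.g. "riddle"
    language: str                     # LA, the actual wording chosen

    def shared_krs(self, other: "GTVHLabel") -> int:
        """Count how many KRs two labels have in common (empty values never match)."""
        return sum(
            a is not None and a == b
            for a, b in zip(astuple(self), astuple(other))
        )

# A light-bulb joke (SI) is always cast as a riddle (NS), as noted above.
lightbulb = GTVHLabel(
    script_opposition="smart/dumb",
    logical_mechanism="exaggeration",
    situation="changing a light bulb",
    target="a generic out-group",
    narrative_strategy="riddle",
    language="How many X does it take to change a light bulb? ...",
)
```

Comparing two such labels with shared_krs mirrors the idea quoted above that jokes can be judged similar or dissimilar according to how many KR values their labels share.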
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. |
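As a concrete illustration of the template-driven approach described above, the sketch below is a hypothetical toy punning program, not a reconstruction of any real system: all of its "knowledge" sits in a hand-written table, and a single fixed knock-knock template is filled from it, which is exactly why such programs display no intelligence.

```python
# A deliberately primitive, template-driven pun generator of the kind described
# above: no understanding, just a fixed knock-knock template filled from a tiny,
# hand-written table. This is a hypothetical toy, not any real system.
import random

# Each entry: (name offered at the door, the punning line it sets up).
PUN_TABLE = [
    ("Lettuce", "Lettuce in, it's cold out here!"),
    ("Tuna", "Tuna piano and it sounds much better."),
    ("Howard", "Howard I know? You tell me!"),
]

TEMPLATE = "Knock knock. Who's there? {name}. {name} who? {punchline}"

def make_joke(rng: random.Random) -> str:
    name, punchline = rng.choice(PUN_TABLE)
    # All the "humour" lives in the pre-defined table; the program itself has no
    # semantic knowledge of why the substitution is funny.
    return TEMPLATE.format(name=name, punchline=punchline)

if __name__ == "__main__":
    print(make_joke(random.Random(1)))
```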
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Zenith] | [TOKENS: 833] |
Contents Zenith The zenith (UK: /ˈzɛnɪθ/, US: /ˈziː-/) is the imaginary point on the celestial sphere directly "above" a particular location. "Above" means in the vertical direction (plumb line) opposite to the gravity direction at that location (nadir). The zenith is the "highest" point on the celestial sphere. The direction opposite of the zenith is the nadir. Origin The word zenith derives from an inaccurate reading of the Arabic expression سمت الرأس (samt al-raʾs), meaning "direction of the head" or "path above the head", by Medieval Latin scribes in the Middle Ages (during the 14th century), possibly through Old Spanish. It was reduced to samt ("direction") and miswritten as senit/cenit, the m being misread as ni. Through the Old French cenith, zenith first appeared in the 17th century. Relevance and use The term zenith sometimes means the highest point, way, or level reached by a celestial body on its daily apparent path around a given point of observation. This sense of the word is often used to describe the position of the Sun ("The sun reached its zenith..."), but to an astronomer, the Sun does not have its own zenith and is at the zenith only if it is directly overhead. In a scientific context, the zenith is the direction of reference for measuring the zenith angle (or zenith angular distance), the angle between a direction of interest (e.g. a star) and the local zenith, that is, the complement of the altitude angle (or elevation angle). The Sun reaches the observer's zenith when it is 90° above the horizon, and this only happens between the Tropic of Cancer and the Tropic of Capricorn. The point where this occurs is known as the subsolar point. In Islamic astronomy, the passing of the Sun over the zenith of Mecca becomes the basis of the qibla observation by shadows twice a year on 27/28 May and 15/16 July. At a given location during the course of a day, the Sun reaches not only its zenith but also its nadir, at the antipode of that location 12 hours from solar noon. In astronomy, the altitude in the horizontal coordinate system and the zenith angle are complementary angles, with the horizon perpendicular to the zenith. The astronomical meridian is also determined by the zenith, and is defined as a circle on the celestial sphere that passes through the zenith, nadir, and the celestial poles. A zenith telescope is a type of telescope designed to point straight up at or near the zenith, and used for precision measurement of star positions, to simplify telescope construction, or both. The NASA Orbital Debris Observatory and the Large Zenith Telescope are both zenith telescopes, since the use of liquid mirrors meant these telescopes could only point straight up. On the International Space Station, zenith and nadir are used instead of up and down, referring to directions within and around the station, relative to the Earth. Zenith stars (also "star on top", "overhead star", "latitude star") are stars whose declination equals the latitude of the observer's location, and which hence culminate (pass) through the zenith at some time in the day or night. When a star is at the zenith, its right ascension equals the local sidereal time at the observer's location. In celestial navigation this allows latitude to be determined, since the declination of the star equals the latitude of the observer. If the current time at Greenwich is known at the time of the observation, the observer's longitude can also be determined from the right ascension of the star. 
Hence "Zenith stars" lie on or near the circle of declination equal to the latitude of the observer ("zenith circle"). Zenith stars are not to be confused with the "steering stars" of a sidereal compass rose. See also Media related to Zenith (topography) at Wikimedia Commons References Further reading |
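As a rough illustration of the two relationships described in this article, the complement between altitude and zenith angle, and the zenith-star rule used in celestial navigation, here is a minimal Python sketch. It assumes an idealised observation (no refraction or instrument error), and the function and variable names are illustrative assumptions, not part of any standard library.

```python
# A minimal sketch of the two relationships described above, under the
# simplifying assumption of an ideal observation (no refraction, no
# instrument error). Names are illustrative only.

def zenith_angle(altitude_deg: float) -> float:
    """Zenith angle is the complement of the altitude (elevation) angle."""
    return 90.0 - altitude_deg

def position_from_zenith_star(ra_hours: float, dec_deg: float,
                              greenwich_sidereal_hours: float) -> tuple[float, float]:
    """Navigation rule for a star observed exactly at the zenith:
    latitude equals the star's declination, and east longitude follows
    from right ascension minus Greenwich sidereal time (1 h = 15 deg)."""
    latitude = dec_deg
    longitude = ((ra_hours - greenwich_sidereal_hours) * 15.0 + 180.0) % 360.0 - 180.0
    return latitude, longitude

print(zenith_angle(90.0))                         # 0.0: the object is at the zenith
print(position_from_zenith_star(6.0, 40.0, 1.0))  # (40.0, 75.0), i.e. 40 N, 75 E
```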
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/U.S._state] | [TOKENS: 7077] |
Contents U.S. state In the United States, a state is a constituent political entity, of which there are 50. Bound together in a political union, each state holds governmental jurisdiction over a separate and defined geographic territory where it shares its sovereignty with the federal government. Due to this shared sovereignty, Americans are citizens both of the federal republic and of the state in which they reside. State citizenship and residency are flexible, and no government approval is required to move between states, except for persons restricted by certain types of court orders, such as paroled convicts and children of divorced spouses who share child custody. State governments in the U.S. are allocated power by the people of each respective state through their individual state constitutions. All are grounded in republican principles (this being required by the federal constitution), and each provides for a government, consisting of three branches, each with separate and independent powers: executive, legislative, and judicial. States are divided into counties or county-equivalents, which may be assigned some local governmental authority but are not sovereign. County or county-equivalent structure varies widely by state, and states also create other local governments. States, unlike U.S. territories, possess many powers and rights under the United States Constitution. States and their citizens are represented in the United States Congress, a bicameral legislature consisting of the Senate and the House of Representatives. Each state is also entitled to select a number of electors, equal to the total number of representatives and senators from that state, to vote in the Electoral College, the body that directly elects the president of the United States. Each state has the opportunity to ratify constitutional amendments. With the consent of Congress, two or more states may enter into interstate compacts with one another. The police power of each state is also recognized. Historically, the tasks of local law enforcement, public education, public health, intrastate commerce regulation, and local transportation and infrastructure, in addition to local, state, and federal elections, have generally been considered primarily state responsibilities, although all of these now have significant federal funding and regulation as well. Over time, the Constitution has been amended, and the interpretation and application of its provisions have changed. The general tendency has been toward centralization and incorporation, with the federal government playing a much larger role than it once did. There is a continuing debate over states' rights, which concerns the extent and nature of the states' powers and sovereignty in relation to the federal government and the rights of individuals. The Constitution grants to Congress the authority to admit new states into the Union. Since the establishment of the United States in 1776 by the Thirteen Colonies, the number of states has expanded from the original 13 to 50. Each new state has been admitted on an equal footing with the existing states. While the Constitution does not explicitly discuss secession from the Union, the United States Supreme Court, in Texas v. White (1869), held that the Constitution did not permit states to unilaterally do so. 
List The 50 states, in alphabetical order, along with each state's flag: Background The 13 original states came into existence in July 1776 during the American Revolutionary War (1775–1783), as the successors of the Thirteen Colonies, upon agreeing to the Lee Resolution and signing the United States Declaration of Independence. Prior to these events each state had been a British colony; each then joined the first Union of states between 1777 and 1781, upon ratifying the Articles of Confederation, the first U.S. constitution. During this period, the newly independent states developed their own individual state constitutions, among the earliest written constitutions in the world. Although different in detail, these state constitutions shared features that would be important in the American constitutional order: they were republican in form and separated power among three branches; most had bicameral legislatures and contained statements, or a bill, of rights. From 1787 to 1790, each of the states ratified a new federal frame of government in the Constitution of the United States. In relation to the states, the U.S. Constitution elaborated concepts of federalism. Governments Under U.S. constitutional law, the 50 individual states and the United States as a whole are each sovereign jurisdictions. The states are not administrative divisions of the country; the Tenth Amendment to the United States Constitution reserves to the states or to the people all powers of government not delegated to the federal government. Consequently, each of the 50 states reserves the right to organize its individual government in any way deemed appropriate by its people (within the broad parameters set by the U.S. Constitution and the Republican Guarantee enforced by Congress), and to exercise all powers of government not delegated to the federal government by the Constitution. A state, unlike the federal government, has un-enumerated police power, that is, the right to generally make all necessary laws for the welfare of its people. As a result, while the governments of the various states share many similar features, they often vary greatly with regard to form and substance. No two state governments are identical. The government of each state is structured in accordance with its individual constitution, all of which are written constitutions. Many of these documents are more detailed and more elaborate than their federal counterpart. For example, before its revision in 2022, the Constitution of Alabama contained 310,296 words, which is more than 40 times as many as the U.S. Constitution. In practice, each state has adopted a three-branch frame of government: executive, legislative, and judicial, even though doing so has never been required. Early in American history, four state governments differentiated themselves from the others in their first constitutions by choosing to self-identify as Commonwealths rather than as states: Virginia, in 1776; Pennsylvania, in 1777; Massachusetts, in 1780; and Kentucky, in 1792. Consequently, while these four are states like the other states, each is formally a commonwealth because the term is contained in its constitution. The term "commonwealth", which refers to a state in which the supreme power is vested in the people, was first used in Virginia during the Interregnum, the 1649–60 period between the reigns of Charles I and Charles II, during which Oliver Cromwell, as Lord Protector, led a republican government known as the Commonwealth of England. 
Virginia became a royal colony again in 1660, and the word was dropped from the full title. It went unused until reintroduced in 1776. In each state, the chief executive is called the governor, who serves as both head of state and head of government. All governors are chosen by statewide direct election. The governor may approve or veto bills passed by the state legislature, as well as recommend and work for the passage of bills, usually supported by their political party. In 44 states, governors have line item veto power. Most states have a plural executive, meaning that the governor is not the only government official in the state responsible for its executive branch. In these states, executive power is distributed amongst other officials, elected by the people independently of the governor—such as the lieutenant governor, attorney general, comptroller, secretary of state, and others. Elections of officials in the United States are generally for a fixed term of office. The constitutions of 19 states allow for citizens to remove and replace an elected public official before the end of their term of office through a recall election. Each state follows its own procedures for recall elections, and sets its own restrictions on how often, and how soon after a general election, they may be held. In all states, the legislatures can remove from office state executive branch officials, including governors, who have committed serious abuses of their power. The process of doing so includes impeachment (the bringing of specific charges), and a trial, wherein legislators act as a jury. The primary responsibilities of state legislatures are to enact state laws and appropriate money for the administration of public policy. In all states, if the governor vetoes a bill, or a portion of one, it can still become law if the legislature overrides the veto (repasses the bill), which in most states requires a two-thirds vote in each chamber. In 49 of the 50 states, the legislature consists of two chambers: a lower house (variously called the House of Representatives, State Assembly, General Assembly or House of Delegates) and a smaller upper house, in all states called the Senate. The exception is Nebraska, whose unicameral legislature has only a single chamber. Most states have a part-time legislature, traditionally called a citizen legislature. Ten state legislatures are considered full-time. These bodies are more similar to the U.S. Congress than are the others. Members of each state's legislature are chosen by direct election. In Baker v. Carr (1962) and Reynolds v. Sims (1964), the U.S. Supreme Court held that all states are required to elect their legislatures in such a way as to afford each citizen the same degree of representation (the one person, one vote standard). In practice, most states elect legislators from single-member districts, each of which has approximately the same population. Some states, such as Maryland and Vermont, divide the state into single- and multi-member districts. In this case, multi-member districts must have proportionately larger populations, e.g., a district electing two representatives must have approximately twice the population of a district electing just one. The voting systems used across the nation are first-past-the-post in single-member districts and multiple non-transferable vote in multi-member districts. In 2013, there were 7,383 legislators in the 50 state legislative bodies. They earned from $0 annually (New Mexico) to $90,526 (California). 
There was also various per diem and mileage compensation. States can also organize their judicial systems differently from the federal judiciary, as long as they protect the federal constitutional right of their citizens to procedural due process. Most have a trial-level court (generally called a district court, superior court or circuit court), a first-level appellate court (generally called a court of appeal, or appeals), and a supreme court. Oklahoma and Texas have separate highest courts for criminal appeals. Uniquely, in New York State, the trial court is called the Supreme Court; appeals go up first to the Supreme Court's Appellate Division, and from there to its highest court, the New York Court of Appeals. State court systems exercise broad, plenary, and general jurisdiction, in contrast to the federal courts, which are courts of limited jurisdiction. The overwhelming majority of criminal and civil cases in the United States are heard in state courts. Each year, roughly 30 million new cases are filed in state courts and the total number of judges across all state courts is about 30,000; for comparison, 1 million new cases are filed each year in federal courts, which have about 1,700 judges. Most states base their legal system on English common law (with substantial statutory changes and incorporation of certain civil law innovations), with the notable exception of Louisiana, a former French colony, which draws large parts of its legal system from French civil law. Only a few states choose to have the judges on the state's courts serve for life terms. In most states, the judges, including the justices of the highest court in the state, are either elected or appointed for terms of a limited number of years and are usually eligible for re-election or reappointment. All states are unitary states, not federations or aggregates of local governments. Local governments within them are created by and exist by virtue of state law, and local governments within each state are subject to the central authority of that particular state. State governments commonly delegate some authority to local units and channel policy decisions down to them for implementation. In a few states, local units of government are permitted a degree of home rule over various matters. The prevailing legal theory of state preeminence over local governments, referred to as Dillon's Rule, holds that a municipal corporation possesses and can exercise the following powers and no others: First, those granted in express words; second, those necessarily implied or necessarily incident to the powers expressly granted; third, those absolutely essential to the declared objects and purposes of the corporation—not simply convenient but indispensable; fourth, any fair doubt as to the existence of power is resolved by the courts against the corporation—against the existence of the powers. Each state defines for itself what powers it will allow local governments. Generally, four categories of power may be given to local jurisdictions: Relationships Each state admitted to the Union by Congress since 1789 has entered it on an equal footing with the original states in all respects. With the growth of states' rights advocacy during the antebellum period, the Supreme Court asserted, in Lessee of Pollard v. Hagan (1845), that the Constitution mandated admission of new states on the basis of equality. With the consent of Congress, states may enter into interstate compacts, agreements between two or more states. 
Compacts are frequently used to manage a shared resource, such as transportation infrastructure or water rights. Under Article IV of the Constitution, which outlines the relationship between the states, each state is required to give full faith and credit to the acts of each other's legislatures and courts, which is generally held to include the recognition of most contracts and criminal judgments, and before 1865, slavery status. Pursuant to the Extradition Clause, a state must extradite people located there who have fled charges of "treason, felony, or other crimes" in another state if the other state so demands. The full faith and credit expectation does have exceptions: some legal arrangements, such as professional licensure and marriages, may be state-specific, and until recently states had not been found by the courts to be required to honor such arrangements from other states. Such legal acts are nevertheless often recognized state-to-state according to the common practice of comity. States are prohibited from discriminating against citizens of other states with respect to their basic rights, under the Privileges and Immunities Clause. Under Article IV, each state is guaranteed a form of government that is grounded in republican principles, such as the consent of the governed. This guarantee has long been at the forefront of the debate about the rights of citizens vis-à-vis the government. States are also guaranteed protection from invasion, and, upon the application of the state legislature (or executive, if the legislature cannot be convened), from domestic violence. This provision was discussed during the 1967 Detroit riot but was not invoked. The Supremacy Clause (Article VI, Clause 2) establishes that the Constitution, federal laws made pursuant to it, and treaties made under its authority, constitute the supreme law of the land. It provides that state courts are bound by the supreme law; in case of conflict between federal and state law, the federal law must be applied. Even state constitutions are subordinate to federal law. States' rights are understood mainly with reference to the Tenth Amendment. The Constitution delegates some powers to the national government, and it forbids some powers to the states. The Tenth Amendment reserves all other powers to the states, or to the people. Powers of the U.S. Congress are enumerated in Article I, Section 8, for example, the power to declare war. Making treaties is one power forbidden to the states, being listed among other such powers in Article I, Section 10. Among the Article I enumerated powers of Congress is the power to regulate commerce. Since the early 20th century, the Supreme Court's interpretation of this "Commerce Clause" has, over time, greatly expanded the scope of federal power, at the expense of powers formerly considered purely states' matters. The Cambridge Economic History of the United States says, "On the whole, especially after the mid-1880s, the Court construed the Commerce Clause in favor of increased federal power." In 1941, the Supreme Court in U.S. v. Darby upheld the Fair Labor Standards Act of 1938, holding that Congress had the power under the Commerce Clause to regulate employment conditions. In 1942, in Wickard v. Filburn, the Court expanded federal power to regulate the economy by holding that federal authority under the commerce clause extends to activities which may appear to be local in nature but in reality affect the entire national economy and are therefore of national concern. 
For example, Congress can regulate railway traffic across state lines, but it may also regulate rail traffic solely within a state, based on the reality that intrastate traffic still affects interstate commerce. Through such decisions, argues law professor David F. Forte, "the Court turned the commerce power into the equivalent of a general regulatory power and undid the Framers' original structure of limited and delegated powers." Subsequently, Congress invoked the Commerce Clause to expand federal criminal legislation, as well as for social reforms such as the Civil Rights Act of 1964. Only within the past couple of decades, through decisions in cases such as U.S. v. Lopez (1995) and U.S. v. Morrison (2000), has the Court tried to limit the Commerce Clause power of Congress. Another enumerated congressional power is its taxing and spending power. An example of this is the system of federal aid for highways, which includes the Interstate Highway System. The system is mandated and largely funded by the federal government and serves the interests of the states. By threatening to withhold federal highway funds, Congress has been able to pressure state legislatures to pass various laws. An example is the nationwide legal drinking age of 21, enacted by each state, brought about by the National Minimum Drinking Age Act. Although some objected that this infringes on states' rights, the Supreme Court upheld the practice as a permissible use of the Constitution's Spending Clause in South Dakota v. Dole 483 U.S. 203 (1987). As prescribed by Article I of the Constitution, which establishes the U.S. Congress, each state is represented in the Senate (irrespective of population size) by two senators, and each is guaranteed at least one representative in the House. Both senators and representatives are chosen in direct popular elections in the various states. Prior to 1913, senators were elected by state legislatures. There are presently 100 senators, who are elected at-large to staggered terms of six years, with one-third of them being chosen every two years. Representatives are elected at large or from single-member districts to terms of two years, not staggered. The size of the House—presently 435 voting members—is set by federal statute. Seats in the House are distributed among the states in proportion to the most recent constitutionally mandated decennial census. The borders of these districts are established by the states individually through a process called redistricting, and within each state all districts are required to have approximately equal populations. Citizens in each state plus those in the District of Columbia indirectly elect the president and vice president. When casting ballots in presidential elections they are voting for presidential electors, who then, using procedures provided in the Twelfth Amendment, elect the president and vice president. There were 538 electors for the most recent presidential election in 2024; the allocation of electoral votes was based on the 2020 census. Each state is entitled to a number of electors equal to the total number of representatives and senators from that state; the District of Columbia is entitled to three electors. 
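A quick arithmetic sketch of the allocation rule just described: each state receives electors equal to its House seats plus its two senators, and the District of Columbia receives three. The helper name and example states below are illustrative only.

```python
# Elector allocation as described above: House seats + 2 senators per state,
# plus 3 electors for the District of Columbia, for a total of 538.

def state_electors(house_seats: int) -> int:
    return house_seats + 2  # two senators per state

TOTAL_HOUSE_SEATS = 435
TOTAL_STATES = 50
DC_ELECTORS = 3

total_electors = TOTAL_HOUSE_SEATS + 2 * TOTAL_STATES + DC_ELECTORS
print(total_electors)        # 538
print(state_electors(1))     # 3: a state with one representative, e.g. Wyoming
print(state_electors(52))    # 54: a state with 52 representatives, e.g. California
```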
While the Constitution does set parameters for the election of federal officials, state law, not federal, regulates most aspects of elections in the U.S., including primaries, the eligibility of voters (beyond the basic constitutional definition), the running of each state's electoral college, as well as the running of state and local elections. All elections—federal, state, and local—are administered by the individual states, and some voting rules and procedures may differ among them. Article V of the Constitution accords states a key role in the process of amending the U.S. Constitution. Amendments may be proposed either by Congress with a two-thirds vote in both the House and the Senate, or by a constitutional convention called for by two-thirds of the state legislatures. To become part of the Constitution, an amendment must be ratified by either—as determined by Congress—the legislatures of three-quarters of the states or state ratifying conventions in three-quarters of the states. The vote in each state (to either ratify or reject a proposed amendment) carries equal weight, regardless of a state's population or length of time in the Union. U.S. states are not sovereign in the Westphalian sense in international law, which says that each State has sovereignty over its territory and domestic affairs, to the exclusion of all external powers, on the principle of non-interference in another State's domestic affairs, and that each State, no matter how large or small, is equal in international law. The 50 U.S. states do not possess international legal sovereignty, meaning that they are not recognized by other sovereign States such as France, Germany or the United Kingdom. The federal government is responsible for international relations, but state and local government leaders occasionally travel to other countries and form economic and cultural relationships. Admission into the Union Article IV also grants to Congress the authority to admit new states into the Union. Since the establishment of the United States in 1776, the number of states has expanded from the original 13 to 50. Each new state has been admitted on an equal footing with the existing states. Article IV also forbids the creation of new states from parts of existing states without the consent of both the affected states and Congress. This caveat was designed to give Eastern states that still had Western land claims (including Georgia, North Carolina, and Virginia) a veto over whether their western counties could become states, and it has served this same function since, whenever a proposal to partition an existing state or states, so that a region within might either join another state or form a new state, has come before Congress. Most of the states admitted to the Union after the original 13 were formed from an organized territory established and governed by Congress in accord with its plenary power under Article IV, Section 3, Clause 2. The outline for this process was established by the Northwest Ordinance (1787), which predates the ratification of the Constitution. In some cases, an entire territory has become a state; in others, some part of a territory has. When the people of a territory make their desire for statehood known to the federal government, Congress may pass an enabling act authorizing the people of that territory to organize a constitutional convention to write a state constitution as a step toward admission to the Union. 
Each act details the mechanism by which the territory will be admitted as a state following ratification of its constitution and election of state officers. Although the use of an enabling act is a traditional historic practice, a number of territories have drafted constitutions for submission to Congress absent an enabling act and were subsequently admitted. Upon acceptance of that constitution and meeting any additional congressional stipulations, Congress has always admitted that territory as a state. In addition to the original 13, six subsequent states were never an organized territory of the federal government, or part of one, before being admitted to the Union. Three were set off from an already existing state (Kentucky, Maine, and West Virginia), two entered the Union after having been sovereign states (Vermont and Texas), and one (California) was established from unorganized territory. Congress is under no obligation to admit states, even in those areas whose population expresses a desire for statehood. Such has been the case numerous times during the nation's history. In one instance, Mormon pioneers in Salt Lake City sought to establish the state of Deseret in 1849. It existed for slightly over two years and was never approved by the United States Congress. In another, leaders of the Five Civilized Tribes (Cherokee, Chickasaw, Choctaw, Creek, and Seminole) in Indian Territory proposed to establish the state of Sequoyah in 1905, as a means to retain control of their lands. The proposed constitution ultimately failed in the U.S. Congress. Instead, the Indian Territory and Oklahoma Territory were both incorporated into the new state of Oklahoma in 1907. The earliest instance occurred while the nation still operated under the Articles of Confederation. The State of Franklin existed for several years, not long after the end of the American Revolution, but was never recognized by the Confederation Congress, which ultimately recognized North Carolina's claim of sovereignty over the area. The territory comprising Franklin later became part of the Southwest Territory, and ultimately of the state of Tennessee. The entry of several states into the Union was delayed due to distinctive complicating factors. Among them, Michigan Territory, which petitioned Congress for statehood in 1835, was not admitted to the Union until 1837, due to a boundary dispute with the adjoining state of Ohio. The Republic of Texas requested annexation to the United States in 1837, but fears about potential conflict with Mexico delayed the admission of Texas for nine years. Statehood for Kansas Territory was held up for several years (1854–61) due to a series of internal violent conflicts involving anti-slavery and pro-slavery factions. West Virginia's bid for statehood was also delayed over slavery and was settled when it agreed to adopt a gradual abolition plan. Proposed additions Guam is an organized, unincorporated territory of the United States in the western Pacific Ocean. The future political status of Guam has been a matter of significant discussion, with public opinion polls indicating a strong preference for becoming a U.S. state. Puerto Rico, an unincorporated U.S. territory, refers to itself as the "Commonwealth of Puerto Rico" in the English version of its constitution, and as "Estado Libre Asociado" (literally, Associated Free State) in the Spanish version. As with all U.S. territories, its residents do not have full representation in the United States Congress. Puerto Rico has limited representation in the U.S. 
House of Representatives in the form of a Resident Commissioner, a delegate with limited voting rights in the Committee of the Whole House on the State of the Union, but no voting rights otherwise. A non-binding referendum on statehood, independence, or a new option for an associated territory (different from the current status) was held on November 6, 2012. Sixty-one percent (61%) of voters chose the statehood option, while one third of the ballots were submitted blank. On December 11, 2012, the Legislative Assembly of Puerto Rico enacted a concurrent resolution requesting the President and the Congress of the United States to respond to the referendum of the people of Puerto Rico, held on November 6, 2012, to end its current form of territorial status and to begin the process to admit Puerto Rico as a state. Another status referendum was held on June 11, 2017, wherein 97% of voters chose statehood. Turnout was low, as only 23% of voters went to the polls, with advocates of both continued territorial status and independence urging voters to boycott it. On June 27, 2018, the H.R. 6246 Act was introduced in the U.S. House with the purpose of responding to, and complying with, the democratic will of the United States citizens residing in Puerto Rico as expressed in the plebiscites held on November 6, 2012, and June 11, 2017, by setting forth the terms for the admission of the territory of Puerto Rico as a state of the Union. The act has 37 original cosponsors among Republicans and Democrats in the U.S. House of Representatives. On November 3, 2020, Puerto Rico held another referendum. In the non-binding referendum, Puerto Ricans voted in favor of becoming a state. They also voted for a pro-statehood governor, Pedro Pierluisi. The intention of the Founding Fathers was that the United States capital should be at a neutral site, not giving favor to any existing state; as a result, the District of Columbia was created in 1800 to serve as the seat of government. As it is not a state, the district does not have representation in the Senate and has a non-voting delegate in the House; nor does it have a sovereign elected government. Additionally, before ratification of the 23rd Amendment in 1961, district citizens did not have the right to vote in presidential elections. A strong majority of residents of the District support statehood of some form for that jurisdiction – either statehood for the whole district or for the inhabited part, with the remainder staying under federal jurisdiction. In November 2016, Washington, D.C. residents voted in a statehood referendum in which 86% of voters supported statehood for Washington, D.C. For statehood to be achieved, it must be approved by Congress. Secession from the Union The Constitution speaks of "union" several times, but does not explicitly discuss the issue of whether a state can secede from the Union. Its predecessor, the Articles of Confederation, stated that the union of the United States "shall be perpetual." The question of whether or not individual states held the unilateral right to secession was a passionately debated feature of the nation's political discourse from early in its history and remained a difficult and divisive topic until the American Civil War. In 1860 and 1861, 11 southern states each declared secession from the United States and joined to form the Confederate States of America (CSA). 
Following the defeat of Confederate forces by Union armies in 1865, those states were brought back into the Union during the ensuing Reconstruction era. The federal government never recognized the sovereignty of the CSA, nor the validity of the ordinances of secession adopted by the seceding states. Following the war, the United States Supreme Court, in Texas v. White (1869), held that states did not have the right to secede and that any act of secession was legally void. Drawing on the "perpetual" union language of the Articles of Confederation, and the succeeding Preamble to the Constitution, which states that the Constitution intends to "form a more perfect union", and speaks of the people of the United States as a single body politic who are the authors of the more perfect union ("We the people"), the Supreme Court found that states did not have a right to secede. The court's reference in the same decision to the possibility of such changes occurring "through revolution, or through consent of the States" means, in essence, that no state has a right to unilaterally decide to leave the Union. Name origins The 50 states have taken their names from a wide variety of languages. Twenty-four state names originate from Native American languages. Of these, eight are from Algonquian languages, seven are from Siouan languages, three are from Iroquoian languages, one is from Uto-Aztecan languages and five others are from other indigenous languages. Hawaii's name is derived from the Polynesian Hawaiian language. Of the remaining names, 22 are from European languages. Seven are from Latin (mainly Latinized forms of English names) and the rest are from English, Spanish and French. Eleven states are named after individual people, including seven named for royalty and one named after a President of the United States. The origins of six state names are unknown or disputed. Several of the states that derive their names from names used for Native peoples retain the final letter "s" in the indigenous name. Geography The borders of the 13 original states were largely determined by colonial charters. Their western boundaries were subsequently modified as the states ceded their western land claims to the Federal government during the 1780s and 1790s. Many state borders beyond those of the original 13 were set by Congress as it created territories, divided them, and over time, created states within them. Territorial and new state lines often followed various geographic features (such as rivers or mountain range peaks), and were influenced by settlement or transportation patterns. At various times, national borders with territories formerly controlled by other countries (British North America, New France, New Spain including Spanish Florida, and Russian America) became institutionalized as the borders of U.S. states. In the West, relatively arbitrary lines following latitude and longitude often prevail due to the sparseness of settlement west of the Mississippi River. Once established, most state borders have, with few exceptions, been generally stable. Only two states, Missouri (via the Platte Purchase) and Nevada, grew appreciably after statehood. Several of the original states ceded land, over a several-year period, to the Federal government, which in turn became the Northwest Territory, Southwest Territory, and Mississippi Territory. In 1791, Maryland and Virginia ceded land to create the District of Columbia (Virginia's portion was returned in 1847). 
In 1850, Texas ceded a large swath of land to the federal government. Additionally, Massachusetts and Virginia (on two occasions) have lost land, in each instance to form a new state. There have been numerous other minor adjustments to state boundaries over the years due to improved surveys, resolution of ambiguous or disputed boundary definitions, or minor mutually agreed boundary adjustments for administrative convenience or other purposes. Occasionally, either Congress or the U.S. Supreme Court has had to settle state border disputes. One notable example is the case New Jersey v. New York, in which New Jersey won roughly 90% of Ellis Island from New York in 1998. Once a territory is admitted by Congress as a state of the Union, any subsequent change pertaining to the jurisdiction of that state requires the consent of both that state and Congress. The only potential violation of this occurred when the legislature of Virginia declared the secession of Virginia from the United States at the start of the American Civil War and a newly formed alternative Virginia legislature, recognized by the federal government, consented to have West Virginia secede from Virginia. States may be grouped in regions; there are many variations and possible groupings. Many are defined in law or regulations by the federal government. For example, the United States Census Bureau defines four statistical regions, with nine divisions. The Census Bureau region definition (Northeast, Midwest, South, and West) is "widely used ... for data collection and analysis," and is the most commonly used classification system. Other multi-state regions are unofficial, and defined by geography or cultural affinity rather than by state lines. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Azerbaijani_language] | [TOKENS: 6886] |
Contents Azerbaijani language Azerbaijani (/ˌæzərbaɪˈdʒæni, -ɑːn-/ AZ-ər-by-JA(H)N-ee; Azərbaycanca, آذربایجانجا, Азәрбајҹанҹа) or Azeri (/æˈzɛəri, ɑːˈ-, əˈ-/ a(h)-ZAIR-ee, ə-), also referred to as Azerbaijani Turkic or Azerbaijani Turkish (Azərbaycan türkcəsi, آذربایجان تۆرکچهسی, Азәрбајҹан түркҹәси), is a Turkic language from the Oghuz sub-branch. It is spoken primarily by the Azerbaijani people, who live mainly in the Republic of Azerbaijan, where the North Azerbaijani variety is spoken, while Iranian Azerbaijanis in the Azerbaijan region of Iran speak the South Azerbaijani variety. It is unclear whether these two varieties form one language; the International Organization for Standardization (ISO) considers Northern and Southern Azerbaijani to be distinct languages. Azerbaijani is the only official language in the Republic of Azerbaijan and one of the 14 official languages of Dagestan (a federal subject of Russia), but it does not have official status in Iran, where the majority of Iranian Azerbaijani people live. Azerbaijani is also spoken to lesser, varying degrees in Azerbaijani communities of Georgia and Turkey and by diaspora communities, primarily in Europe and North America. Although there is a high degree of mutual intelligibility between the two forms of Azerbaijani, there are significant differences in phonology, lexicon, morphology, syntax, and sources of loanwords. The standardized form of North Azerbaijani (spoken in the Republic of Azerbaijan and Russia) is based on the Shirvani dialect, while South Azerbaijani uses a variety of regional dialects. Since the Republic of Azerbaijan's independence from the Soviet Union in 1991, Northern Azerbaijani has used the Latin script. On the other hand, South Azerbaijani has always used and continues to use the Perso-Arabic script. Azerbaijani is closely related to Turkmen, Turkish, Gagauz, and Qashqai, being mutually intelligible with each of these languages to varying degrees. Etymology and background Historically, the language was referred to by its native speakers as türk dili or türkcə, meaning either "Turkish" or "Turkic". In the early years following the establishment of the Azerbaijan Soviet Socialist Republic, the language was still referred to as "Turkic" in official documents. However, in the 1930s, its name was officially changed to "Azerbaijani". The language is often still referred to as Turki or Torki (Turkish or Turkic) in Iranian Azerbaijan. The term "Azeri", generally interchangeable with "Azerbaijani", is from Turkish Azeri. The 17th century Capuchin missionary Raphael du Mans used the expression "Turk Ajami" in relation to the Azerbaijani language. This term is used by many modern authors to designate the direct historical predecessor of the modern Azerbaijani language (see Middle Azerbaijani language). The term is derived from earlier designations, such as lingua turcica agemica, or Turc Agemi, which was used in a grammar book composed in 1684 by the French Capuchin writer Raphaël du Mans (died 1696). Local texts simply called the language türkī. During "the Isfahan phase of the Safavids", it was called ḳızılbaşī in contrast to rūmī (Ottoman) and çaġatā’ī (Chagatai), due to its close relation to dialects spoken by the Qizilbash. The terms Azerbaijani and Azeri were used interchangeably for the language throughout the 19th and 20th centuries. 
History and evolution Azerbaijani evolved from the Eastern branch of Oghuz Turkic ("Western Turkic") which spread to the Caucasus in Eastern Europe and northern Iran in West Asia during the medieval Turkic migrations. Persian and Arabic influenced the language, but Arabic words were mainly transmitted through the intermediary of literary Persian. Azerbaijani is, perhaps after Uzbek, the Turkic language upon which Persian and other Iranian languages have exerted the strongest impact—mainly in phonology, syntax, and vocabulary, less in morphology. During the period of the Qara Qoyunlu and Aq Qoyunlu states, Azerbaijani Turkic (in the sources of that period, "Turki") gradually began to emerge as a means of literary and poetic expression. During this period, writing in Turkic became fashionable in the court and among poets. The ruler of the Qara Qoyunlu, Jahanshah, was known by his pen name "Haqiqi", and the ruler of the Aq Qoyunlu, Sultan Yaqub, was known for writing poems in Turkic. The great Sufi poet Qasim-i Anvar also accepted Turkic as a literary language and presented highly poetic examples in this language. The Turkic language of Azerbaijan gradually supplanted the Iranian languages in what is now northwestern Iran, and a variety of languages of the Caucasus and Iranian languages spoken in the Caucasus, particularly Udi and Old Azeri. By the beginning of the 16th century, it had become the dominant language of the region. It was one of the spoken languages in the court of the Safavids and Qajars. The historical development of Azerbaijani can be divided into two major periods: early (c. 14th to 18th century) and modern (18th century to present). Early Azerbaijani differs from its descendant in that it contained a much larger number of Persian and Arabic loanwords, phrases and syntactic elements. Early writings in Azerbaijani also demonstrate linguistic interchangeability between Oghuz and Kypchak elements in many aspects (such as pronouns, case endings, participles, etc.). As Azerbaijani gradually moved from being merely a language of epic and lyric poetry to being also a language of journalism and scientific research, its literary version has become more or less unified and simplified with the loss of many archaic Turkic elements, stilted Iranisms and Ottomanisms, and other words, expressions, and rules that failed to gain popularity among the Azerbaijani masses. The Russian annexation of Iran's territories in the Caucasus through the Russo-Iranian wars of 1804–1813 and 1826–1828 split the language community across two states. Afterwards, the Tsarist administration encouraged the spread of Azerbaijani in eastern Transcaucasia as a replacement for Persian spoken by the upper classes, and as a measure against Persian influence in the region. Between c. 1900 and 1930, there were several competing approaches to the unification of the national language in what is now the Azerbaijan Republic, popularized by scholars such as Hasan bey Zardabi and Mammad agha Shahtakhtinski. Despite major differences, they all aimed primarily at making it easy for semi-literate masses to read and understand literature. They all criticized the overuse of Persian, Arabic, and European elements in both colloquial and literary language and called for a simpler and more popular style. 
The Soviet Union promoted the development of the language but set it back considerably with two successive script changes – from the Persian to Latin and then to the Cyrillic script – while Iranian Azerbaijanis continued to use the Persian script as they always had. Despite the wide use of Azerbaijani in the Azerbaijan Soviet Socialist Republic, it became the official language of Azerbaijan only in 1956. After independence, the Republic of Azerbaijan decided to switch back to a modified Latin script. Azerbaijani literature The development of Azerbaijani literature is closely associated with Anatolian Turkish, written in Perso-Arabic script. Examples of its detachment date to the 14th century or earlier. Kadi Burhan al-Din, Hasanoghlu, and Imadaddin Nasimi helped to establish Azerbaijani as a literary language in the 14th century through poetry and other works. One ruler of the Qara Qoyunlu state, Jahanshah, wrote poems in the Azerbaijani language under the pen name "Haqiqi". Sultan Yaqub, a ruler of the Aq Qoyunlu state, wrote poems in the Azerbaijani language. The ruler and poet Ismail I wrote under the pen name Khatā'ī (which means "sinner" in Persian) during the fifteenth century. During the 16th century, the poet, writer and thinker Fuzûlî wrote mainly in Azerbaijani but also translated his poems into Arabic and Persian. Starting in the 1830s, several newspapers were published in Iran during the reign of the Qajar dynasty, but it is unknown whether any of these newspapers were written in Azerbaijani. In 1875, Akinchi (Əkinçi / اکينچی) ("The Ploughman") became the first Azerbaijani newspaper to be published in the Russian Empire. It was started by Hasan bey Zardabi, a journalist and education advocate. Mohammad-Hossein Shahriar is an important figure in Azerbaijani poetry. His most important work, Heydar Babaya Salam, is considered a pinnacle of Azerbaijani literature and gained popularity in the Turkic-speaking world. It was translated into more than 30 languages. In the mid-19th century, Azerbaijani literature was taught at schools in Baku, Ganja, Shaki, Tbilisi, and Yerevan. Since 1845, it has also been taught at Saint Petersburg State University in Russia. As of 2018, Azerbaijani language and literature programs are offered in the United States at several universities, including Indiana University, UCLA, and University of Texas at Austin. The vast majority, if not all, Azerbaijani language courses teach North Azerbaijani written in the Latin script and not South Azerbaijani written in the Perso-Arabic script. Modern literature in the Republic of Azerbaijan is primarily based on the Shirvani dialect, while in the Iranian Azerbaijan region (historic Azerbaijan) it is based on the Tabrizi one. Lingua franca An Azerbaijani koine served as a lingua franca throughout most parts of Transcaucasia except the Black Sea coast, in southern Dagestan, and all over Iran from the 16th to the early 20th centuries, alongside Persian, which was the language of culture, administration, and court literature, and most importantly the official language, of all these regions. From the early 16th century up to the course of the 19th century, these regions and territories were all ruled by the Safavids, Afsharids, Zands, and Qajars until the cession of Transcaucasia proper and Dagestan by Qajar Iran to the Russian Empire per the 1813 Treaty of Gulistan and the 1828 Treaty of Turkmenchay. 
Per the 1829 Caucasus School Statute, Azerbaijani (Tatar) was taught in all district schools of Ganja, Shusha, Nukha (present-day Shaki), Shamakhi, Quba, Baku, Derbent, Yerevan, Nakhchivan, Akhaltsikhe, and Lankaran. Dialects Azerbaijani is one of the Oghuz languages within the Turkic language family. Ethnologue lists North Azerbaijani (spoken mainly in the Republic of Azerbaijan and Russia) and South Azerbaijani (spoken in Iran, Iraq, and Syria) as two groups within the Azerbaijani macrolanguage with "significant differences in phonology, lexicon, morphology, syntax, and loanwords" between the two. The International Organization for Standardization (ISO) considers Northern and Southern Azerbaijani to be distinct languages. Linguists Mohammad Salehi and Aydin Neysani write that "there is a high degree of mutual intelligibility" between North and South Azerbaijani. Svante Cornell wrote in his 2001 book Small Nations and Great Powers that "it is certain that Russian and Iranian words (sic), respectively, have entered the vocabulary on either side of the Araxes river, but this has not occurred to an extent that it could pose difficulties for communication". There are numerous dialects, with 21 North Azerbaijani dialects and 11 South Azerbaijani dialects identified by Ethnologue. Three varieties have been accorded ISO 639-3 language codes: North Azerbaijani, South Azerbaijani and Qashqai. The Glottolog 4.1 database classifies North Azerbaijani, with 20 dialects, and South Azerbaijani, with 13 dialects, under the Modern Azeric family, a branch of Central Oghuz. In the northern dialects of the Azerbaijani language, linguists find traces of the influence of the Khazar language. According to Encyclopedia Iranica: We may distinguish the following Azeri dialects: (1) eastern group: Derbent (Darband), Kuba, Shemakha (Šamāḵī), Baku, Salyani (Salyānī), and Lenkoran (Lankarān), (2) western group: Kazakh (not to be confounded with the Kipchak-Turkic language of the same name), the dialect of the Ayrïm (Āyrom) tribe (which, however, resembles Turkish), and the dialect spoken in the region of the Borchala river; (3) northern group: Zakataly, Nukha, and Kutkashen; (4) southern group: Yerevan (Īravān), Nakhichevan (Naḵjavān), and Ordubad (Ordūbād); (5) central group: Ganja (Kirovabad) and Shusha; (6) North Iraqi dialects; (7) Northwest Iranian dialects: Tabrīz, Reżāʾīya (Urmia), etc., extended east to about Qazvīn; (8) Southeast Caspian dialect (Galūgāh). Optionally, we may adjoin as Azeri (or "Azeroid") dialects: (9) East Anatolian, (10) Qašqāʾī, (11) Aynallū, (12) Sonqorī, (13) dialects south of Qom, (14) Kabul Afšārī. North Azerbaijani, or Northern Azerbaijani, is the official language of the Republic of Azerbaijan. It is closely related to modern-day Istanbul Turkish, the official language of Turkey. It is also spoken in southern Dagestan, along the Caspian coast in the southern Caucasus Mountains and in scattered regions throughout Central Asia. As of 2011, there are some 9.23 million speakers of North Azerbaijani including 4 million monolingual speakers (many North Azerbaijani speakers also speak Russian, as is common throughout former USSR countries). The Shirvan dialect as spoken in Baku is the basis of standard Azerbaijani. Since 1992, it has been officially written with a Latin script in the Republic of Azerbaijan, but the older Cyrillic script was still widely used in the late 1990s. 
Ethnologue lists 21 North Azerbaijani dialects: "Quba, Derbend, Baku, Shamakhi, Salyan, Lenkaran, Qazakh, Airym, Borcala, Terekeme, Qyzylbash, Nukha, Zaqatala (Mugaly), Qabala, Nakhchivan, Ordubad, Ganja, Shusha (Karabakh), Karapapak, Kutkashen, Kuba". South Azerbaijani, or Iranian Azerbaijani, is widely spoken in Iranian Azerbaijan and, to a lesser extent, in neighboring regions of Turkey and Iraq, with smaller communities in Syria. In Iran, the Azerbaijani language is referred to in Persian as Torki ("Turkic"). In Iran, it is spoken mainly in East Azerbaijan, West Azerbaijan, Ardabil and Zanjan. It is also spoken in Tehran and across the Tehran Province, as Azerbaijanis form by far the largest minority in the city and the wider province, comprising about 1⁄6 of its total population. The CIA World Factbook reports that in 2010, the percentage of Iranian Azerbaijani speakers was around 16 percent of the Iranian population, or approximately 13 million people worldwide, and ethnic Azeris form by far the second largest ethnic group of Iran, thus making the language also the second most spoken language in the nation. Ethnologue reports 10.9 million Azerbaijani-speakers in Iran in 2016 and 13,823,350 worldwide. Dialects of South Azerbaijani include: Classification Russian comparatist Oleg Mudrak calls the Turkmen language the closest relative of Azerbaijani. Speakers of Turkish and Azerbaijani can, to an extent, communicate with each other as both languages have substantial similarity. However, it is easier for many Azerbaijani speakers to understand Turkish than it is for Turkish speakers to understand Azerbaijani. Turkish soap operas are very popular with Azeris in both Iran and Azerbaijan. Reza Shah Pahlavi of Iran (who spoke South Azerbaijani) met with Mustafa Kemal Atatürk of Turkey (who spoke Turkish) in 1934; the two were filmed speaking their respective languages to each other and communicated effectively. In a 2011 study, 30 Turkish participants were tested to determine how well they understood written and spoken Azerbaijani. It was found that even though Turkish and Azerbaijani are typologically similar languages, intelligibility on the part of Turkish speakers is not as high as estimated. In a 2017 study, Iranian Azerbaijanis scored on average 56% in receptive intelligibility of spoken Turkish. Azerbaijani exhibits a stress pattern similar to that of Turkish, but simpler in some respects. Azerbaijani is a strongly stressed and partially stress-timed language, unlike Turkish which is weakly stressed and syllable-timed.[citation needed] Below are some cognates with different spelling in Azerbaijani and Turkish: The 1st person personal pronoun is mən in Azerbaijani just as men in Turkmen, whereas it is ben in Turkish. The same is true for the demonstrative pronoun bu, where the sound b is replaced with the sound m. For example: bunun>munun/mının, muna/mına, munu/munı, munda/mında, mundan/mından. This is observed in the Turkmen literary language as well, where the demonstrative pronoun bu undergoes some changes just as in: munuñ, munı, muña, munda, mundan, munça. The b>m replacement is encountered in many dialects of the Turkmen language and may be observed in such words as: boyun>moyın in Yomut – Gunbatar dialect, büdüremek>müdüremek in Ersari and Stavropol Turkmens' dialects, bol>mol in Karakalpak Turkmens' dialects, buzav>mizov in Kirac dialects. 
Here are some words from the Swadesh list to compare Azerbaijani with Turkmen: Azerbaijani dialects share paradigms of verbs in some tenses with the Chuvash language, on which linguists also rely in the study and reconstruction of the Khazar language. Phonology Azerbaijani phonotactics is similar to that of other Oghuz Turkic languages, except: Works on Azerbaijani dialectology use the following notations for dialectal consonants: Examples: The vowels of Azerbaijani are, in alphabetical order, a /ɑ/, e /e/, ə /æ/, ı /ɯ/, i /i/, o /o/, ö /œ/, u /u/, ü /y/. The typical phonetic quality of South Azerbaijani vowels is as follows: The modern Azerbaijani Latin alphabet contains the digraphs ov and öv to represent diphthongs present in the language, and the pronunciation of diphthongs is today accepted as the norm in the orthophony of Azerbaijani. Despite this, the number and even the existence of diphthongs in Azerbaijani have been disputed, with some linguists, such as Abdulazal Damirchizade [az], arguing that they are non-phonemic. Damirchizade's view was challenged by others, such as Aghamusa Akhundov [az], who argued that Damirchizade was taking orthography as the basis of his judgement, rather than its phonetic value. According to Akhundov, Azerbaijani contains two diphthongs, /ou̯/ and /œy̯/, represented by ov and öv in the alphabet, both of which are phonemic due to their contrast with /o/ and /œ/, represented by o and ö. In some cases, a non-syllabic /v/ can also be pronounced after the aforementioned diphthongs, to form /ou̯v/ and /œy̯v/, the rules of which are as follows: Modern linguists who have examined Azerbaijani's vowel system have almost unanimously recognised that diphthongs are phonetically produced in speech. Writing systems Before 1929, Azerbaijani was written only in the Perso-Arabic alphabet, an impure abjad that does not represent all vowels (without diacritical marks). In Iran, the process of standardization of orthography started with the publication of Azerbaijani magazines and newspapers such as Varlıq (وارلیق 'Existence') from 1979. Azerbaijani-speaking scholars and literary figures showed great interest in such ventures and in working towards the development of a standard writing system. These efforts culminated in language seminars held in Tehran in 2001, chaired by the founder of Varlıq, Javad Heyat, at which a document outlining the standard orthography and writing conventions was published for the public. This standard of writing is today canonized by a Persian–Azeri Turkic dictionary in Iran titled Loghatnāme-ye Torki-ye Āzarbāyjāni. Between 1929 and 1938, a Latin alphabet was in use for North Azerbaijani, although it was different from the one used now. From 1938 to 1991, the Cyrillic script was used. Lastly, in 1991, the current Latin alphabet was introduced, although the transition to it has been rather slow. For instance, until an Aliyev decree on the matter in 2001, newspapers would routinely write headlines in the Latin script, leaving the stories in Cyrillic. The transition has also resulted in some misrendering of İ as Ì. In Dagestan, Azerbaijani is still written in Cyrillic script. The Azerbaijani Latin alphabet is based on the Turkish Latin alphabet. In turn, the Turkish Latin alphabet was based on the former Azerbaijani Latin alphabet because of their linguistic connections and mutual intelligibility. The letters Әə, Xx, and Qq are available only in Azerbaijani for sounds which do not exist as separate phonemes in Turkish.
Northern Azerbaijani, unlike Turkish, respells foreign names to conform with Latin Azerbaijani spelling, e.g. Bush is spelled Buş and Schröder becomes Şröder. Hyphenation across lines directly corresponds to spoken syllables, except for geminated consonants which are hyphenated as two separate consonants as morphonology considers them two separate consonants back to back but enunciated in the onset of the latter syllable as a single long consonant, as in other Turkic languages.[citation needed] Vocabulary Some samples include: Secular: Invoking deity: Azerbaijani has informal and formal ways of saying things. This is because there is a strong tu-vous distinction in Turkic languages like Azerbaijani and Turkish (as well as in many other languages). The informal "you" is used when talking to close friends, relatives, animals or children. The formal "you" is used when talking to someone who is older than the speaker or to show respect (to a professor, for example). As in many Turkic languages, personal pronouns can be omitted, and they are only added for emphasis. Since 1992, North Azerbaijani has used a phonetic writing system, so pronunciation is easy: most words are pronounced exactly as they are spelled. However, the combination qq in words is pronounced [kɡ], as the first voiced velar stop is devoiced when it is geminated, such as in çaqqal, pronounced [t͡ʃɑkɡɑl]. /t͡ʃæhɾɑjɯ/ /bænœy̑ʃæji/ The numbers 11–19 are constructed as on bir and on iki, literally meaning "ten-one, ten-two" and so on up to on doqquz ("ten-nine"). Greater numbers are constructed by combining in tens and thousands larger to smaller in the same way, without using a conjunction in between. Sample text Article 1 of the Universal Declaration of Human Rights: بُتون إنسانلَر لَیاقَت و حُقوقلَرینه گوره آزاد و بَرابَر طوغُلورلَر. اونلَرِݣ شعورلَری و وِجدانلَری وار و بِر بِرلَرینه مُناسِبَتده قَرداشلِق روحنده طاورانمهلیدِرلَر بۆتون اینسانلار لیاقت و حۆقوقلارینا گؤره آزاد و برابر دوْغولورلار. اوٓنلارین شۆعورلاری و ویجدانلاری وار و بیر-بیرلرینه مۆناسیبتده قارداشلیق روحوندا داورانمالیدیرلار Butun insanlar ləjakət və hukykları̡na ƣɵrə azad və bərabər dogylyrlar. Onları̡ŋ зuyrları̡ və vicdanları̡ var və bir-birlərinə munasibətdə kardaзlı̡k ryhynda davranmalı̡dı̡rlar. Bytyn insanlar ləjaqət və hyquqlarьna gɵrə azad və вəraвər doƣulurlar. Onlarьŋ şyurlarь və viçdanlarь var və вir-вirlərinə mynasiвətdə qardaşlьq ruhunda davranmalьdьrlar. Бүтүн инсанлар ләягәт вә һүгугларына ҝөрә азад вә бәрабәр доғулурлар. Онларын шүурлары вә виҹданлары вар вә бир-бирләринә мүнасибәтдә гардашлыг руһунда давранмалыдырлар. Бүтүн инсанлар ләјагәт вә һүгугларына ҝөрә азад вә бәрабәр доғулурлар. Онларын шүурлары вә виҹданлары вар вә бир-бирләринә мүнасибәтдә гардашлыг руһунда давранмалыдырлар. Bütün insanlar läyaqät vä hüquqlarına görä azad vä bärabär doğulurlar. Onların şüurları vä vicdanları var vä bir-birlärinä münasibätdä qardaşlıq ruhunda davranmalıdırlar. Bütün insanlar ləyaqət və hüquqlarına görə azad və bərabər doğulurlar. Onların şüurları və vicdanları var və bir-birlərinə münasibətdə qardaşlıq ruhunda davranmalıdırlar. [byˈt̪ʏ̃n̪ ʔɪ̃n̪s̪ɑ̝̃n̪ˈɫ̪ɑ̝ɾ l̪æ̝jɑ̝ːˈgæ̝t̪ væ̝ ɦygugl̪ɑ̝ɾɯ̞̃ˈn̪ɑ̝ ɟœ̝ˈɾæ̝ ʔɑ̞ːˈz̪ɑ̝t̪ væ̝ bæ̝ɾɑ̝ːˈbæ̝ɾ d̪o̞ɣʊɫ̪ʊɾˈɫ̪ɑ̝ɾ ‖ ʔõ̞n̪ɫ̪ɑ̝ˈɾɯ̞̃n̪ ʃyʔʊɾɫ̪ɑ̝ˈɾɯ̞ væ̝ vid͡ʒd̪ɑ̝̃n̪ɫ̪ɑ̝ˈɾɯ̞ ʋɑ̝ɾ væ̝ ˌbɪɾ‿bɪɾl̪æ̝ɾɪ̃ˈn̪æ̝ mʏ̃n̪ɑ̝ːs̪ibæ̝t̪̚ˈd̪æ̝ gɑ̝ɾd̪ɑ̝ʃˈɫ̪ɯ̞χ ɾuːɦʊ̃n̪ˈd̪ɑ̝ d̪ɑ̝ʋɾɑ̝̃n̪mɑ̝ɫ̪ɯ̞d̪ɯ̞ˈɫ̪ɑ̝ɾ ‖] All human beings are born free and equal in dignity and rights. 
They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. |
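The additive numeral formation described earlier (on bir "ten-one", on iki "ten-two", with larger units placed before smaller ones and no conjunction between them) can be sketched as follows. Only on, bir, iki and doqquz appear in the text; the remaining numeral words below are standard Azerbaijani forms supplied here for illustration and should be treated as assumptions.

```python
# Sketch of the additive number construction described above: larger units
# precede smaller ones and are simply juxtaposed, with no conjunction.
# Only "on", "bir", "iki" and "doqquz" appear in the text; the other numeral
# words below are standard Azerbaijani forms supplied for illustration.
ONES = ["", "bir", "iki", "üç", "dörd", "beş", "altı", "yeddi", "səkkiz", "doqquz"]
TENS = ["", "on", "iyirmi", "otuz", "qırx", "əlli", "altmış", "yetmiş", "səksən", "doxsan"]

def azerbaijani_number(n: int) -> str:
    """Spell out 1-999 by juxtaposing hundreds, tens and ones, largest first."""
    if not 1 <= n <= 999:
        raise ValueError("sketch handles 1-999 only")
    parts = []
    hundreds, rest = divmod(n, 100)
    if hundreds:
        parts.append("yüz" if hundreds == 1 else ONES[hundreds] + " yüz")
    tens, ones = divmod(rest, 10)
    if tens:
        parts.append(TENS[tens])
    if ones:
        parts.append(ONES[ones])
    return " ".join(parts)

print(azerbaijani_number(11))   # on bir ("ten-one", as in the text)
print(azerbaijani_number(19))   # on doqquz
print(azerbaijani_number(245))  # iki yüz qırx beş
```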
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEBeard2014186–188-12] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure include: As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
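The GTVH labelling scheme described above lends itself to a simple data-structure sketch. The text names only the SI, NS, TA and LM resources explicitly; the full six-way set used below (Script Opposition, Logical Mechanism, Situation, Target, Narrative Strategy, Language) follows the standard GTVH literature, and all the sample values are illustrative placeholders rather than Attardo and Raskin's own data.

```python
from dataclasses import dataclass, fields
from typing import Optional

# Sketch of a GTVH-style joke label: one value per Knowledge Resource (KR).
# TA and LM may be empty, per the text; the single hierarchy check below is
# the example restriction cited there (a lightbulb joke must be a riddle).
@dataclass
class JokeLabel:
    script_opposition: str            # SO
    logical_mechanism: Optional[str]  # LM -- may be empty
    situation: str                    # SI
    target: Optional[str]             # TA -- may be empty
    narrative_strategy: str           # NS
    language: str                     # LA

def hierarchy_problems(label: JokeLabel) -> list[str]:
    """Check the one higher-level restriction cited in the text:
    SI 'lightbulb' requires NS 'riddle'."""
    problems = []
    if label.situation == "lightbulb" and label.narrative_strategy != "riddle":
        problems.append("SI 'lightbulb' requires NS 'riddle'")
    return problems

def shared_krs(a: JokeLabel, b: JokeLabel) -> int:
    """Crude similarity measure: count the KRs on which two labels agree."""
    return sum(getattr(a, f.name) == getattr(b, f.name) for f in fields(a))

bulb = JokeLabel("smart/dumb", None, "lightbulb", "an ethnic group",
                 "riddle", "neutral wording")
print(hierarchy_problems(bulb))  # [] -- the label is internally consistent
print(shared_krs(bulb, bulb))    # 6 -- identical labels agree on all KRs
```

Counting shared KR values, as in the last function, is one crude way to realise the label-based similarity comparison the text mentions.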
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. |
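A deliberately crude sketch of the template-driven punning programs described above is given below; the template and the pre-defined options are invented here purely for illustration and make no claim to the linguistic intelligence the passage says such systems lack.

```python
import random

# Toy illustration of a template-based punning program: no linguistic
# knowledge, just one fixed template and a finite list of pre-defined
# fillers, all invented here for illustration.
TEMPLATE = "What do you call {article} {description}? {punchline}!"

OPTIONS = [
    {"article": "a", "description": "fake noodle", "punchline": "An impasta"},
    {"article": "a", "description": "fish with no eyes", "punchline": "A fsh"},
    {"article": "an", "description": "alligator in a vest", "punchline": "An investigator"},
]

def generate_pun() -> str:
    """Slot one pre-defined option into the fixed template -- nothing more."""
    return TEMPLATE.format(**random.choice(OPTIONS))

print(generate_pun())
```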
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Balochi_language] | [TOKENS: 2364] |
Contents Balochi language Balochi (بلۏچی, romanized: Balòci) is a Northwestern Iranian language, spoken by the Baloch in the Balochistan region of Pakistan, Iran and Afghanistan. In addition, there are speakers in Oman, the Arab states of the Persian Gulf, Turkmenistan, East Africa and in diaspora communities in other parts of the world. The total number of speakers, according to Ethnologue, is 8.8 million. Of these, 6.28 million are in Pakistan. Balochi varieties constitute a dialect continuum and collectively have at least 10 million native speakers. The main varieties of Balochi are Eastern (Soleimani), Southern (Makrani) and Western (Rakhshani). Koroshi is a dialect of Balochi spoken mainly in the provinces of Fars and Hormozgan. According to Brian Spooner: Literacy for most Baloch-speakers is not in Balochi, but in Urdu in Pakistan and Persian in Afghanistan and Iran. Even now, very few Baloch read Balochi in any of the countries, regardless of the alphabet in which it is printed. Balochi belongs to the Western Iranian subgroup, and its original homeland is suggested to be around the central Caspian region. Classification Balochi is an Indo-European language, spoken by the Baloch and belonging to the Indo-Iranian branch of the family. As an Iranian language, it is classified in the Northwestern group. Glottolog classifies four different varieties, namely Koroshi, Southern Balochi and Western Balochi (grouped under a "Southern-Western Balochi" branch), and Eastern Balochi, all under the "Balochic" group. According to the research of Carina Jahani, ISO 639-3 groups Southern, Eastern, and Western Balochi under the Balochi macrolanguage, keeping Koroshi separate. Dialects These dialects are broadly categorized into three main groups: Koroshi is also classified as Balochi. Elfenbein divides the dialects of the Balochi language into six categories: Rakhshani (subdialects: Kalati and Sarhaddi), Panjguri, Saravani, Lashari, Kechi, and Coastal Dialects. Rakhshani. Panjguri: it includes most of the Kharan region, with the Kech River forming its southern border, the Rakhshan River its northern border, and Kolwa located to its east. Saravani: Saravan and its surrounding areas, with Khash as its northern border and Espidan as its western border; in Elfenbein's later works, Iranshahr and Bampur are also considered to be within the Saravani dialect area. Kechi: the Kech region in Balochistan, including Turbat. Lashari: centered on the village of Lashar, south of Iranshahr, where Balochi is close to Persian and Baskardi. Coastal dialects: including Qasr-e Qand, Nikshahr, Rask and the southern coastal areas of Balochistan from near Bandar Abbas to Karachi Port, including the ports of Chahbahar, Gwadar and Pasni. There are two main dialects: the dialect of the Mandwani (northern) tribes and the dialect of the Domki (southern) tribes. The dialectal differences are not very significant. One difference is that grammatical terminations in the northern dialect are less distinct compared with those in the southern tribes. An isolated dialect is Koroshi, which is spoken in the Qashqai tribal confederation in the Fars province.
Koroshi distinguishes itself in grammar and lexicon among Balochi varieties. The Balochi Academy Sarbaz has designed a standard alphabet for Balochi.[better source needed] Uppsala University offers a course titled Balochi A, which provides basic knowledge of the phonetics and syntax of the Balochi language. Carina Jahani is a prominent Swedish Iranologist and professor of Iranian languages at Uppsala University whose research focuses on the study and preservation of the Balochi language. Phonology The Balochi vowel system has at least eight vowels: five long and three short.[page needed] These are /aː/, /eː/, /iː/, /oː/, /uː/, /a/, /i/ and /u/. The short vowels have a more centralized phonetic quality than the long vowels. The variety spoken in Karachi also has nasalized vowels, most importantly /ẽː/ and /ãː/.[page needed] In addition to these eight vowels, Balochi has two vowel glides, that is /aw/ and /aj/. The following table shows consonants which are common to both Western (Northern) and Southern Balochi.[page needed] The consonants /s/, /z/, /n/, /ɾ/ and /l/ are articulated as alveolar in Western Balochi. The plosives /t/ and /d/ are dental in both dialects. The symbol ń is used to denote nasalization of the preceding vowel. In addition, /f/ occurs in a few words in Southern Balochi; /x/ (voiceless velar fricative) occurs in some loanwords in Southern Balochi, corresponding to /χ/ (voiceless uvular fricative) in Western Balochi; and /ɣ/ (voiced velar fricative) occurs in some loanwords in Southern Balochi, corresponding to /ʁ/ (voiced uvular fricative) in Western Balochi. In Eastern Balochi, it is noted that the stop and glide consonants may also occur as aspirated allophones in word-initial position as [pʰ tʰ ʈʰ t͡ʃʰ kʰ] and [wʱ]. Allophones of stops in postvocalic position include [f θ x] for voiceless stops and [β ð ɣ] for voiced stops. /n l/ are also dentalized as [n̪ l̪]. The difference between a question and a statement is marked by tone when there is no question word. Rising tone marks the question and falling tone the statement. Statements and questions with a question word are characterized by falling intonation at the end of the sentence. Questions without a question word are characterized by rising intonation at the end of the sentence. Both coordinate and subordinate clauses that precede the final clause in the sentence have rising intonation. The final clause in the sentence has falling intonation. Grammar The normal word order is subject–object–verb. Like many other Indo-Iranian languages, Balochi also features split ergativity. The subject is marked as nominative except for the past tense constructions where the subject of a transitive verb is marked as oblique and the verb agrees with the object. Balochi, like many Western Iranian languages, has lost the Old Iranian gender distinctions. Much of the Balochi number system is identical to that of Persian. According to Mansel Longworth Dames, Balochi writes the first twelve numbers as follows: Writing system Balochi was not a written language before the 19th century, and the Persian script was used to write Balochi wherever necessary. However, Balochi was still spoken at the Baloch courts.[citation needed] British colonial officers first wrote Balochi with the Latin script. Following the creation of Pakistan, Baloch scholars adopted the Persian alphabet. The first collection of poetry in Balochi, Gulbang by Mir Gul Khan Nasir, was published in 1951 and incorporated the Arabic script.
It was much later that Sayad Zahoor Shah Hashemi wrote a comprehensive guide to the usage of the Arabic script and standardized it as the Balochi Orthography in Pakistan and Iran. This earned him the title of 'Father of Balochi'. His guidelines are widely used in Eastern and Western Balochistan. In Afghanistan, Balochi is still written in a modified Arabic script based on Persian. In 2002, a conference was held to help standardize the script that would be used for Balochi. The following alphabet was used by Syed Zahoor Shah Hashmi in his Balochi lexicon Sayad Ganj (سید گنج, lit. 'Sayad's Treasure'). Until the creation of the Balochi Standard Alphabet, it was by far the most widely used alphabet for writing Balochi, and is still used very frequently. آ، ا، ب، پ، ت، ٹ، ج، چ، د، ڈ، ر، ز، ژ، س، ش، ک، گ، ل، م، ن، و، ھ ہ، ء، ی ے The Balochi Standard Alphabet, standardized by the Balochi Academy Sarbaz, consists of 29 letters. It is an extension of the Perso-Arabic script and borrows a few glyphs from Urdu. It is also sometimes referred to as Balo-Rabi or Balòrabi. Today, it is the preferred script in professional settings and among educated speakers. The following Latin-based alphabet was adopted by the International Workshop on "Balochi Roman Orthography" (University of Uppsala, Sweden, 28–30 May 2000): a á b c d ď e f g ĝ h i í j k l m n o p q r ř s š t ť u ú v w x y z ž ay aw (33 letters and 2 digraphs). In 1933, the Soviet Union adopted a Latin-based alphabet for Balochi. The alphabet was used for several texts, including children's books, newspapers, and ideological works. In 1938, however, the official use of Balochi was discontinued. In 1989, Mammad Sherdil, a teacher from the Turkmen SSR, approached the Balochi language researcher Sergei Axenov with the idea of creating a Cyrillic-based alphabet for Balochi. Before this, the Cyrillic script had already been used to write Balochi in several publications, but the alphabet was not standardized. The alphabet was finished in 1990, and the project was approved with some minor changes (қ, ꝑ, and ы were removed due to the rarity of those sounds in Balochi, and о̄ was added). From 1992 to 1993, several primary school textbooks were printed in this script. In the early 2000s, the script fell out of use. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cold_reading] | [TOKENS: 2311] |
Contents Cold reading Cold reading is a set of techniques used by mentalists, psychics, fortune-tellers, and mediums. Without prior knowledge, a practiced cold-reader can quickly obtain a great deal of information by analyzing the person's body language, age, clothing or fashion, hairstyle, gender, sexual orientation, religion, ethnicity, level of education, manner of speech, place of origin, etc. during a line of questioning. Cold readers commonly employ high-probability guesses, quickly picking up on signals as to whether their guesses are in the right direction or not. The reader then emphasizes and reinforces any accurate connections while quickly moving on from missed guesses. Psychologists believe the technique appears to work because of the Barnum effect and confirmation bias. Basic procedure Before starting the actual reading, the reader will typically try to elicit cooperation from the subject, saying something such as, "I often see images that are a bit unclear and which may sometimes mean more to you than to me; if you help, we can together uncover new things about you." One of the most crucial elements of a convincing cold reading is a subject eager to make connections or reinterpret vague statements in any way that will help the reader appear to make specific predictions or intuitions. While the reader will do most of the talking, it is the subject who provides the meaning. After determining that the subject is cooperative, the reader will make a number of probing statements or questions, typically using variations of the methods noted below. The subject will then reveal further information with their replies (whether verbal or non-verbal) and the cold reader can continue from there, pursuing promising lines of inquiry and quickly abandoning or avoiding unproductive ones. In general, while revelations seem to come from the reader, most of the facts and statements come from the subject, which are then refined and restated by the reader so as to reinforce the idea that the reader got something correct. Subtle cues such as changes in facial expression or body language can indicate whether a particular line of questioning is effective or not. Combining the techniques of cold reading with information obtained covertly (also called "hot reading") can leave a strong impression that the reader knows or has access to a great deal of information about the subject. Because the majority of time during a reading is spent dwelling on the "hits" the reader obtains, while the time spent recognizing "misses" is minimized, the effect gives an impression that the cold reader knows far more about the subject than an ordinary stranger could. James Underdown of the Center for Inquiry and the Independent Investigations Group said, "In the context of a studio audience full of people, cold reading is not very impressive." Underdown explains cold reading from a mathematical viewpoint. A typical studio audience consists of approximately 200 people, divided up into three sections. A conservative estimate assumes each person knows 150 people. Underdown says: "This means that when John Edward or James Van Praagh asks the question 'Who's Margaret?' he is hoping there is a Margaret in the 10,000 people in the database of that section. If there is no answer, they open the question up to the whole audience's database of over 30,000 people! Would it be surprising for there to be a dozen Margarets in such a large sample?" 
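Underdown's point is essentially arithmetic: multiply the audience by the number of acquaintances each member brings to the reading, and even a moderately common name is likely to be represented many times over. The short sketch below is not from the source; the audience size and acquaintance count follow Underdown's figures, while the frequency assumed for the name "Margaret" is a hypothetical value chosen only to show the order of magnitude.

```python
# Illustrative sketch of the "audience database" arithmetic described above.
# The name frequency below is an assumption for demonstration, not a cited statistic.

audience_size = 200      # approximate studio audience
sections = 3             # audience divided into three sections
known_per_person = 150   # conservative estimate of acquaintances per person

section_database = (audience_size // sections) * known_per_person  # ~10,000 names
audience_database = audience_size * known_per_person               # 30,000 names

# Hypothetical assumption: roughly 1 in 2,500 acquaintances is named "Margaret".
name_frequency = 1 / 2500

expected_per_section = section_database * name_frequency
expected_in_audience = audience_database * name_frequency

print(f"Names known to one section:    ~{section_database:,}")
print(f"Names known to whole audience: ~{audience_database:,}")
print(f"Expected Margarets per section:  {expected_per_section:.1f}")   # about 4
print(f"Expected Margarets in audience:  {expected_in_audience:.1f}")   # about 12
```

Under these illustrative assumptions the expected count across the whole audience comes out to roughly a dozen, which is the scale Underdown's rhetorical question points at.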
Mentalist Mark Edward relates from personal experience as a "psychic performer" how powerful a hit can be when someone in a large audience "claims" a phrase such as a "clown in a graveyard" statement. Edward describes a mental image of a clown placing flowers on graves and adds, "Does that mean anything to someone?", whereupon a woman stands up and claims that he is speaking directly to her. She remembers it as Edward specifically stating that she knew a man who dressed as a clown and placed flowers on graves in her hometown. Edward reports that it took some convincing to get her to understand that he was not directly talking to her, but had thrown the statement out to the entire audience of 300 people. She made the connection, and because it seemed so personal, and the situation so odd, she felt that he was talking to her directly. Specific techniques "Shotgunning" is a commonly used cold reading technique. This technique is named after the manner in which a shotgun fires a cluster of small projectiles in the hope that one or more of them will strike the target. The cold reader slowly offers a huge quantity of very general information, often to an entire audience (some of which is very likely to be correct, near correct or, at the very least, provocative or evocative to someone present), observes their subjects' reactions (especially their body language), and then narrows the scope, acknowledging particular people or concepts and refining the original statements according to those reactions to promote an emotional response. A majority of people in a room will, for example, have lost an older relative at some point or known at least one person with a common name like "Mike" or "John". Shotgunning might include a series of vague statements of this kind. The Forer effect relies in part on the eagerness of people to fill in details and make connections between what is said and some aspect of their own lives, often searching their entire life's history to find some connection, or reinterpreting statements in a number of different possible ways so as to make it apply to themselves. "Barnum statements", named after P. T. Barnum, the American showman, are statements that seem personal, yet apply to many people. While seemingly specific, such statements are often open-ended or give the reader the maximum amount of "wiggle room" in a reading. They are designed to elicit identifying responses from people. The statements can then be developed into longer and more sophisticated paragraphs and seem to reveal great amounts of detail about a person. A talented and charismatic reader can sometimes even bully a subject into admitting a connection, demanding over and over that they acknowledge a particular statement as having some relevance and maintaining that they are just not thinking hard enough, or are repressing some important memory. Statements of this type might include a suggestion that the subject's father passed away because of trouble in his chest or abdomen. If the subject is old enough, their father is quite likely to have died, and such a statement would easily apply to a large number of medical conditions including heart disease, pneumonia, diabetes, emphysema, cirrhosis of the liver, kidney failure, most types of cancer, as well as any cause of death in which cardiac arrest precedes death, or damage to the brainstem responsible for cardiopulmonary function. Warm reading is a performance tool used by professional mentalists and psychic scam artists. 
While hot reading is the use of foreknowledge and cold reading works on reacting to the subject's responses, warm reading refers to the judicious use of Barnum effect statements. When these psychological tricks are used properly, the statements give the impression that the mentalist, or psychic scam artist, is intuitively perceptive and psychically gifted. In reality, the statements fit nearly all of humanity, regardless of gender, personal opinions, age, epoch, culture, or nationality. Michael Shermer gives the example of jewelry worn by those in mourning. Most people in this situation will be wearing or carrying an item of jewelry with some connection to the person they have lost, but if asked directly in the context of a psychic reading whether they have such an item, the client may be shocked and assume that the reader learned the information directly from the deceased loved one. Robert Todd Carroll notes in The Skeptic's Dictionary that some would consider this to be cold reading. The rainbow ruse is a crafted statement which simultaneously awards the subject a specific personality trait, as well as the opposite of that trait. With such a phrase, a cold reader can "cover all possibilities" and appear to have made an accurate deduction in the mind of the subject, despite the fact that a rainbow ruse statement is vague and contradictory. This technique is used since personality traits are not quantifiable, and also because nearly everybody has experienced both sides of a particular emotion at some time in their lives. Statements of this type include: A cold reader can choose from a variety of personality traits, think of its opposite, and then bind the two together in a phrase, vaguely linked by factors such as mood, time, or potential. Contrasting claims of performers The mentalist branch of the stage-magician community approves of "reading" as long as it is presented strictly as an artistic entertainment and one is not pretending to be psychic. Some performers who use cold reading are honest about their use of the technique. Lynne Kelly, Kari Coleman, Ian Rowland, and Derren Brown have used these techniques at either private fortune-telling sessions or open forum "talking with the dead" sessions in the manner of those who claim to be genuine mediums. Only after receiving acclaim and applause from their audience do they reveal that they needed no psychic power for the performance, only a sound knowledge of psychology and cold reading. In an episode of his Trick of the Mind series broadcast in March 2006, Derren Brown showed how easily people can be influenced through cold reading techniques by repeating Bertram Forer's famous demonstration of the personal validation fallacy, or Forer effect. Sitter misremembering In a detailed review of four sittings conducted by medium Tyler Henry, Edward and Susan Gerbic reviewed all statements made by him on the TV show Hollywood Medium. In their opinion not one statement made by Henry was accurate, yet each sitter felt that their reading was highly successful. In interviews with each sitter after their sitting, all four claimed specific statements made by Henry, but, after reviewing the show, it was shown that he had not made those statements. Each sitter had misremembered what Henry said. One of many examples of this was when Henry, during a session with celebrity Ross Mathews, stated "Bambi, why am I connecting to Bambi?" Mathews stated that his father, who was a hunter, would not shoot deer because of the movie Bambi. 
In the post-interview, Mathews stated that "It was weird that Henry knew that my father would not shoot deer because of Bambi", demonstrating that Mathews did not remember that he, not Henry, had supplied the connection to his father. Gerbic has pointed out the broader issue of the human brain attempting to make connections that then make it appear that the psychic was correct. She lists this among a number of techniques or situations that psychics take advantage of. Subconscious cold reading Former New Age practitioner Karla McLaren has spoken of the importance of reducing the appearance of unusual expertise that might create a power differential, for instance by posing observations as questions rather than as facts. This attempt to be polite, she realized, actually invited the other person to "lean into the reading" and give her more pertinent information. After some people have performed hundreds of readings, their skills may improve to the point where they may start believing they can read minds. They may ask themselves if their success is because of psychology, intuition or a psychic ability. This point of thought is known by some skeptics of the paranormal as the "transcendental temptation". Magic historian and occult investigator Milbourne Christopher has warned that yielding to this temptation may lead one unknowingly into a belief in the occult and a deterioration of reason. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jewish_exodus_from_Arab_and_Muslim_lands] | [TOKENS: 18689] |
Contents Jewish exodus from the Muslim world Approximately 900,000 Jews migrated, fled, or were expelled from Muslim-majority countries throughout Africa and Asia in the 20th century, primarily as a consequence of the establishment of the State of Israel. Large-scale migrations were also organized, sponsored, and facilitated by Zionist organizations such as Mossad LeAliyah Bet, the Jewish Agency, and the Hebrew Immigrant Aid Society. The mass movement mainly transpired from 1948 to the early 1970s, with one final exodus of Iranian Jews occurring shortly after the Islamic Revolution in 1979–1980. An estimated 650,000 (72%) of these Jews resettled in Israel. A number of small-scale Jewish migrations began across the Middle East in the early 20th century, with the only substantial aliyot (Jewish immigrations to the Land of Israel) coming from Yemen and Syria. Few Jews from Muslim countries immigrated during the British Mandate for Palestine. Prior to Israel's independence in 1948, approximately 800,000 Jews were living on lands that now make up the Arab world. Of these, just under two-thirds lived in the French- and Italian-controlled regions of North Africa, 15–20% lived in the Kingdom of Iraq, approximately 10% lived in the Kingdom of Egypt, and approximately 7% lived in the Aden Colony, Aden Protectorate and the Kingdom of Yemen. A further 200,000 Jews lived in the Imperial State of Iran and the Republic of Turkey. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily from Iraq, Yemen, and Libya. In these cases, over 90% of the Jewish population left, leaving their assets and properties behind. Between 1948 and 1951, 250,000 Jews immigrated to Israel from Arab countries. In response, the Israeli government implemented policies to accommodate 600,000 immigrants over four years, doubling the country's Jewish population. Reactions in the Knesset were mixed; in addition to some Israeli officials, there were those within the Jewish Agency who opposed promoting a large-scale emigration movement among Jews whose lives were not in immediate danger. Later waves peaked at different times in different regions over the subsequent decades. The exodus from Egypt peaked in 1956, following the Suez Crisis; emigrations from other North African countries peaked in the 1960s. Lebanon's Jewish population temporarily increased due to an influx of Jews from other Arab countries, before it dwindled by the mid-1970s. 600,000 Jews from Arab and Muslim countries had relocated to Israel by 1972, while another 300,000 migrated to France, the United States and Canada. Today, the descendants of Jews who immigrated to Israel from other Middle Eastern lands (known as Mizrahi Jews and Sephardic Jews) constitute more than half of all Israelis. The Jewish Agency for Israel estimated that the total number of Jews in Arab and Muslim countries in 2023 was 27,000, with Turkey having 14,200 Jewish residents and Iran having 9,100. The reasons for the exoduses include: pull factors such as the desire to fulfill Zionism, better economic prospects and security, and the Israeli government's "One Million Plan" to accommodate Jewish immigrants from Arab- and Muslim-majority countries; and push factors such as violent and other forms of antisemitism in the Arab world, political instability, poverty, and expulsion. The history of the exodus has been politicized, given its proposed relevance to the historical narrative of the Arab–Israeli conflict. 
Those who view the Jewish exodus as analogous to the 1948 Palestinian expulsion and flight generally emphasize the push factors and consider those who left to have been refugees, while those who oppose that view generally emphasize the pull factors and consider the Jews to have been willing immigrants. North Africa In the 19th century, the Francization of Jews in French colonial North Africa, driven by the work of organizations such as the Alliance Israelite Universelle and by French policies such as the Algerian citizenship decree of 1870, resulted in a separation of the community from the local Muslims. France began its conquest of Algeria in 1830. The following century had a profound influence on the status of the Algerian Jews; following the 1870 Crémieux Decree, they were elevated from the protected-minority dhimmi status to French citizenship. The decree set off a wave of Pied-Noir-led anti-Jewish protests (such as the 1897 anti-Jewish riots in Oran), in which the Muslim community did not participate, to the disappointment of the European agitators. There were, however, also cases of Muslim-led anti-Jewish riots, such as in Constantine in 1934, when 34 Jews were killed. Neighbouring Husainid Tunisia began to come under European influence in the late 1860s and became a French protectorate in 1881. Beginning with the 1837 accession of Ahmed Bey, and continuing under his successor Muhammed Bey, Tunisia's Jews were elevated within Tunisian society, with improved freedom and security that were confirmed and safeguarded during the French protectorate. Around a third of Tunisian Jews took French citizenship during the protectorate. Morocco, which had remained independent during the 19th century, became a French protectorate in 1912. However, during less than half a century of colonization, the equilibrium between Jews and Muslims in Morocco was upset, and the Jewish community was again positioned between the colonisers and the Muslim majority. French penetration into Morocco between 1906 and 1912 created significant resentment among Moroccan Muslims, resulting in nationwide protests and military unrest. During this period a number of anti-European or anti-French protests extended to include anti-Jewish manifestations, as in Casablanca, Oujda and Fes in 1907–08 and later in the 1912 Fes riots. The situation in colonial Libya was similar; as in the French North African countries, the Italian influence in Libya was welcomed by the Jewish community, increasing their separation from the non-Jewish Libyans. The Alliance Israélite Universelle, founded in France in 1860, set up schools in Algeria, Morocco and Tunisia as early as 1863. During World War II, Morocco, Algeria, Tunisia and Libya came under Nazi or Vichy French occupation, and their Jews were subject to various forms of persecution. In Libya, the Axis powers established labor camps to which many Jews were forcibly deported. In other areas Nazi propaganda targeted Arab populations to incite them against British or French rule. National Socialist propaganda contributed to the transfer of racial antisemitism to the Arab world and is likely to have unsettled Jewish communities. An anti-Jewish riot took place in Casablanca in 1942 in the wake of Operation Torch, when a local mob attacked the Jewish mellah (the Moroccan term for a Jewish ghetto). However, according to the Hebrew University of Jerusalem's Dr. 
Haim Saadon, "Relatively good ties between Jews and Muslims in North Africa during World War II stand in stark contrast to the treatment of their co-religionists by gentiles in Europe." From 1943 until the mid-1960s, the American Jewish Joint Distribution Committee was an important foreign organization driving change and modernization in the North African Jewish community. It had initially become involved in the region whilst carrying out relief work during World War II. The migration of Moroccan Jews to Israel was sponsored, facilitated and administered by Zionist organizations, notably through Cadima (1949–1956) and Operation Yachin (1961–1964). As in Tunisia and Algeria, Moroccan Jews did not face large scale expulsion or outright asset confiscation or any similar government persecution during the period of exile, and Zionist agents were relatively allowed freedom of action to encourage emigration. In Morocco, the Vichy regime during World War II passed discriminatory laws against Jews; for example, Jews were no longer able to get any form of credit, Jews who had homes or businesses in European neighborhoods were expelled, and quotas were imposed limiting the percentage of Jews allowed to practice professions such as law and medicine to no more than two percent.[disputed – discuss] King Mohammed V expressed his personal distaste for these laws, assuring Moroccan Jewish leaders that he would never lay a hand "upon either their persons or property". While there is no concrete evidence of him actually taking any actions to defend Morocco's Jews, it has been argued that he may have worked on their behalf behind the scenes. In June 1948, soon after Israel was established and in the midst of the first Arab–Israeli war, violent anti-Jewish riots broke out in Oujda and Djerada, leading to deaths of 44 Jews. In 1948–49, after the massacres, 18,000 Moroccan Jews left the country for Israel. Later, however, Jewish migration from Morocco slowed to a few thousand a year. Following Moroccan independence in 1956, a new wave of Moroccan Jews emigrated from the country towards South America, Israel, France, and Spain. The Caisse d’Aide aux Immigrants Marocains or Cadima (Hebrew: קדימה, 'forward') was the clandestine Zionist apparatus that arranged and oversaw the mass migration of Moroccan Jews to Israel from 1949 to 1956, during the final years of French colonial rule in Morocco.: 164 Cadima was administered by Jewish Agency and Mossad Le'Aliyah agents sent from Israel, with assistance from local Moroccan Zionists. It was based out of an office in Casablanca and operated cells in large cities as well as a transit camp along the road to al-Jadida, from which Jewish migrants would depart for Israel via Marseille. Through the early 1950s, Zionist organizations encouraged immigration, particularly in the poorer south of the country, seeing Moroccan Jews as valuable contributors to the Jewish State: The more I visited in these (Berber) villages and became acquainted with their Jewish inhabitants, the more I was convinced that these Jews constitute the best and most suitable human element for settlement in Israel's absorption centers. There were many positive aspects which I found among them: first and foremost, they all know (their agricultural) tasks, and their transfer to agricultural work in Israel will not involve physical and mental difficulties. They are satisfied with few (material needs), which will enable them to confront their early economic problems. 
— Yehuda Grinker, The Emigration of Atlas Jews to Israel Incidents of anti-Jewish violence continued through the 1950s, although French officials later stated that Moroccan Jews "had suffered comparatively fewer troubles than the wider European population" during the struggle for independence. In August 1953, riots broke out in the city of Oujda and resulted in the death of four Jews, including an 11-year-old girl. In the same month, French security forces prevented a mob from breaking into the Jewish mellah of Rabat. In 1954, a nationalist event in the town of Petitjean (known today as Sidi Kacem) turned into an anti-Jewish riot and resulted in the death of 6 Jewish merchants from Marrakesh. However, according to Francis Lacoste, French Resident-General in Morocco, "the ethnicity of the Petitjean victims was coincidental, terrorism rarely targeted Jews, and fears about their future were unwarranted." In 1955, a mob broke into the Jewish mellah in Mazagan (known today as El Jadida) and caused its 1,700 Jewish residents to flee to the European quarters of the city. The houses of some 200 Jews were too badly damaged during the riots for them to return. In 1954, Mossad had established an undercover base in Morocco, sending agents and emissaries within a year to appraise the situation and organize continuous emigration. The operations were composed of five branches: self-defense, information and intelligence, illegal immigration, establishing contact, and public relations. Mossad chief Isser Harel visited the country in 1959 and 1960, reorganized the operations, and created a clandestine militia named the "Misgeret" ("framework"). Jewish emigration to Israel jumped from 8,171 people in 1954 to 24,994 in 1955, increasing further in 1956. Between 1955 and independence in 1956, 60,000 Jews emigrated. On 7 April 1956, Morocco attained independence. Jews occupied several political positions, including three parliamentary seats and the cabinet position of Minister of Posts and Telegraphs. However, that minister, Leon Benzaquen, did not survive the first cabinet reshuffling, and no Jew was appointed again to a cabinet position. Although the relations with the Jewish community at the highest levels of government were cordial, these attitudes were not shared by the lower ranks of officialdom, which exhibited attitudes that ranged from traditional contempt to outright hostility. Morocco's increasing identification with the Arab world, and pressure on Jewish educational institutions to Arabize and conform culturally added to the fears of Moroccan Jews. Between 1956 and 1961, emigration to Israel was prohibited by law; clandestine emigration continued, and a further 18,000 Jews left Morocco. On 10 January 1961 the Egoz, a Mossad-leased ship carrying Jews attempting to emigrate undercover, sank off the northern coast of Morocco. According to Tad Szulc, the Misgeret commander in Morocco, Alex Gattmon, decided to precipitate a crisis on the back of the tragedy, consistent with Mossad Director Isser Harel's scenario that "a wedge had to be forced between the royal government and the Moroccan Jewish community and that anti-Hassan nationalists had to be used as leverage as well if a compromise over emigration was ever to be attained". A pamphlet agitating for illegal emigration, supposedly by an underground Zionist organization, was printed by Mossad and distributed throughout Morocco, causing the government to "hit the roof". 
These events prompted King Mohammed V to allow Jewish emigration, and over the three following years, more than 70,000 Moroccan Jews left the country, primarily as a result of Operation Yachin.[citation needed] In June 1961, reports surfaced regarding the continued removal of Jewish officials from prominent positions within the Moroccan government. M. Zaoui, the director of Conservation Fonciere in the Moroccan Finance Ministry, was dismissed without a specified reason.[citation needed] The extremist Muslim journal Al Oumal then launched a campaign against him, accusing him of Zionist affiliations. Earlier in the year, Meyer Toledano had also been removed from his role as judicial counselor to the Moroccan Foreign Ministry. Simultaneously, uneasiness arose among Moroccan Jews as they examined the 17 articles of the new "Fundamental Law" signed by King Hassan on 2 June. Article 15, in particular, raised concerns, emphasizing Morocco's commitment to the Arab League and the intention to strengthen ties with it. Although the new law did not revoke the equal rights of Jews and Muslims in Morocco, it notably omitted the term "Jew," and the first two articles underscored Morocco as an Arab and Muslim country with Islam as the official state religion. Operation Yachin was fronted by the New York-based Hebrew Immigrant Aid Society (HIAS), who financed approximately $50 million of costs. HIAS provided an American cover for underground Israeli agents in Morocco, whose functions included organizing emigration, arming of Jewish Moroccan communities for self-defense and negotiations with the Moroccan government. By 1963, the Moroccan Interior Minister Colonel Oufkir and Mossad chief Meir Amit agreed to swap Israeli training of Moroccan security services and some covert military assistance for intelligence on Arab affairs and continued Jewish emigration. By 1967, only 50,000 Jews remained. The 1967 Six-Day War led to increased Arab–Jewish tensions worldwide, including in Morocco, and significant Jewish emigration out of the country continued. By the early 1970s, the Jewish population of Morocco fell to 25,000; however, most of the emigrants went to France, Belgium, Spain, and Canada, rather than Israel. According to Esther Benbassa, the migration of Jews from the North African countries was prompted by uncertainty about the future. In 1948, 250,000–265,000 Jews lived in Morocco. By 2001, an estimated 5,230 remained.[citation needed] Despite their dwindling numbers, Jews continue to play a notable role in Morocco; the King retains a Jewish senior adviser, André Azoulay, and Jewish schools and synagogues receive government subsidies. Despite this, Jewish targets have sometimes been attacked (notably the 2003 bombing attacks on a Jewish community center in Casablanca), and there is sporadic antisemitic rhetoric from radical Islamist groups. Tens of thousands of Israeli Jews with Moroccan heritage visit Morocco every year, especially around Rosh Hashana or Passover, although few have taken up the late King Hassan II's offer to return and settle in Morocco.[citation needed] As in Tunisia and Morocco, Algerian Jews did not face large scale expulsion or outright asset confiscation or any similar government persecution during the period of exile, and Zionist agents were relatively allowed freedom of action to encourage emigration. Jewish emigration from Algeria was part of a wider ending of French colonial control and the related social, economic and cultural changes. 
The Israeli government had been successful in encouraging Moroccan and Tunisian Jews to emigrate to Israel, but was less so in Algeria. Despite offers of visas and economic subsidies, only 580 Jews moved from Algeria to Israel in 1954–55. Emigration peaked during the Algerian War of 1954–1962, during which thousands of Muslims, Christians and Jews left the country, particularly the Pied-Noir community. In 1956, Mossad agents worked underground to organize and arm the Jews of Constantine, who comprised approximately half the Jewish population of the country. In Oran, a Jewish counter-insurgency movement was thought to have been trained by former members of Irgun. As of the last French census in Algeria, taken on 1 June 1960, there were 1,050,000 non-Muslim civilians in Algeria, constituting 10 percent of the total population; this included 130,000 Algerian Jews. After Algeria became independent in 1962, about 800,000 Pieds-Noirs (including Jews) were evacuated to mainland France while about 200,000 chose to remain in Algeria. Of the latter, there were still about 100,000 in 1965 and about 50,000 by the end of the 1960s. As the Algerian Revolution intensified from the late 1950s onward, most of Algeria's 140,000 Jews began to leave. The community had lived mainly in Algiers and Blida, Constantine, and Oran.[citation needed] Between late 1961 and late summer 1962, 130,000 of Algeria's approximately 140,000 Jews left for France, while about 10,000 of them emigrated to Israel. Their "repatriation" represents a unique case in the history of Jewish migration given that even though they were psychologically uprooted, they "returned" to France as citizens and not as refugees. The Great Synagogue of Algiers was consequently abandoned after 1994.[citation needed] Jewish migration from North Africa to France led to the rejuvenation of the French Jewish community, which is now the third largest in the world.[citation needed] As in Morocco and Algeria, Tunisian Jews did not face large scale expulsion or outright asset confiscation or any similar government persecution during the period of exile, and Jewish emigration societies were allowed relative freedom of action to encourage emigration. In 1948, approximately 105,000 Jews lived in Tunisia. About 1,500 remain today,[when?][vague] mostly in Djerba, Tunis, and Zarzis. Following Tunisia's independence from France in 1956, emigration of the Jewish population to Israel and France accelerated. After attacks in 1967, Jewish emigration both to Israel and France accelerated further. There were also attacks in 1982, in 1985 following Israel's Operation Wooden Leg, and most recently in 2002, when a bombing in Djerba took 21 lives (most of them German tourists) near the local synagogue, a terrorist attack claimed by Al-Qaeda.[citation needed] According to Maurice Roumani, a Libyan emigrant who was previously the executive director of WOJAC, the most important factors that influenced the Libyan Jewish community to emigrate were "the scars left from the last years of the Italian occupation and the entry of the British Military in 1943 accompanied by the Jewish Palestinian soldiers". Zionist emissaries, so-called shlichim, had begun arriving in Libya in the early 1940s, with the intention to "transform the community and transfer it to Palestine". In 1943, Mossad LeAliyah Bet began to send emissaries to prepare the infrastructure for the emigration of the Libyan Jewish community. 
In 1942, German troops fighting the Allies in North Africa occupied the Jewish quarter of Benghazi, plundering shops and deporting more than 2,000 Jews across the desert. Sent to work in labor camps like Giado, more than one-fifth of that group of Jews perished. At the time, most Libyan Jews lived in the cities of Tripoli and Benghazi; there were smaller numbers in Bayda and Misrata. Following the Allied victory at the Battle of El Agheila in December 1942, German and Italian troops were driven out of Libya. The British assigned the Palestine Regiment as the garrison in Cyrenaica. This unit later became the core of the Jewish Brigade, which was later also stationed in Tripolitania. The pro-Zionist soldiers encouraged the spread of Zionism throughout the local Jewish population. Following the liberation of North Africa by Allied forces, antisemitic incitement was still widespread. The most severe racial violence between the start of World War II and the establishment of Israel erupted in Tripoli in November 1945. Over a period of several days more than 140 Jews (including 36 children) were killed, hundreds were injured, 4,000 were displaced and 2,400 were reduced to poverty. Five synagogues in Tripoli and four in provincial towns were destroyed, and over 1,000 Jewish residences and commercial buildings were plundered in Tripoli alone. Gil Shefler writes that "As awful as the pogrom in Libya was, it was still a relatively isolated occurrence compared to the mass murders of Jews by locals in Eastern Europe." The same year, anti-Jewish violence also occurred in Cairo, resulting in 10 Jewish victims. In 1948, about 38,000 Jews lived in Libya. The pogroms continued in June 1948, when 15 Jews were killed and 280 Jewish homes destroyed. In November 1948, a few months after the events in Tripoli, the American consul in Tripoli, Orray Taft Jr., reported that: "There is reason to believe that the Jewish Community has become more aggressive as the result of the Jewish victories in Palestine. There is also reason to believe that the community here is receiving instructions and guidance from the State of Israel. Whether or not the change in attitude is the result of instructions or a progressive aggressiveness is hard to determine. Even with the aggressiveness or perhaps because of it, both Jewish and Arab leaders inform me that the inter-racial relations are better now than they have been for several years and that understanding, tolerance and cooperation are present at any top level meeting between the leaders of the two communities." Immigration to Israel began in 1949, following the establishment of a Jewish Agency for Israel office in Tripoli. According to Harvey E. Goldberg, "a number of Libyan Jews" believe that the Jewish Agency was behind the riots, given that the riots helped them achieve their goal. Between the establishment of the State of Israel in 1948 and Libyan independence in December 1951, over 30,000 Libyan Jews emigrated to Israel. On 31 December 1958, the President of the Executive Council of Tripolitania ordered the dissolution of the Jewish Community Council and the appointment of a Muslim commissioner nominated by the Government. A law issued in 1961 required Libyan citizenship for the possession and transfer of property in Libya, a requirement that was met by only six Libyan Jews. 
Jews were banned from voting, attaining public offices and from serving in the army or in police.[citation needed] In 1967, during the Six-Day War, the Jewish population of over 4,000 was again subjected to riots in which 18 were killed and many more injured. The pro-Western Libyan government of King Idris tried unsuccessfully to maintain law and order. On 17 June 1967, Lillo Arbib, leader of the Jewish community in Libya, sent a formal request to Libyan prime minister Hussein Maziq requesting that the government "allow Jews so desiring to leave the country for a time, until tempers cool and the Libyan population understands the position of Libyan Jews, who have always been and will continue to be loyal to the State, in full harmony and peaceful coexistence with the Arab citizens at all times." According to David Harris, the executive director of the Jewish advocacy organization AJC, the Libyan government "faced with a complete breakdown of law and order ... urged the Jews to leave the country temporarily", permitting them each to take one suitcase and the equivalent of $50. Through an airlift and the aid of several ships, over 4,000 Libyan Jews were evacuated to Italy by the Italian Navy, where they were assisted by the Jewish Agency for Israel. Of the Jews evacuated, 1,300 subsequently immigrated to Israel, 2,200 remained in Italy, and most of the rest went to the United States. A few scores remained in Libya. Some Libyan Jews who had been evacuated temporarily returned to Libya between 1967 and 1969 in an attempt to recover lost property. In September 1967 only 100 Jews remained in Libya, falling to less than 40 five years later in 1972 and just 16 by 1977. On 21 July 1970 the Libyan government issued a law which confiscated assets of the Jews who had previously left Libya, issuing in their stead 15-year bonds. However, when the bonds matured in 1985 no compensation was paid. Libyan leader Muammar Gaddafi later justified this on the grounds that "the alignment of the Jews with Israel, the Arab nations' enemy, has forfeited their right to compensation." Although the main synagogue in Tripoli was renovated in 1999, it has not reopened for services. In 2002, Esmeralda Meghnagi, who was thought to be the last Jew in Libya, died. However, that same year, it was discovered that Rina Debach, an 80-year old Jewish woman who was thought to be dead by her family in Rome, was still alive and living in a nursing home in the country. With her subsequent departure for Rome, there were no more Jews left in Libya. Israel is home to a significant population of Jews of Libyan descent, who maintain their unique traditions. Jews of Libyan descent also make up a significant part of the Italian Jewish community. About 30% of the registered Jewish population of Rome is of Libyan origin. Middle East The British mandate over Iraq came to an end in June 1930, and in October 1932 the country became independent. The Iraqi government response to the demand of Assyrian autonomy (the Assyrians being the indigenous Eastern Aramaic-speaking Semitic descendants of the ancient Assyrians and Mesopotamians, and largely affiliated to the Assyrian Church of the East, Chaldean Catholic Church and Syriac Orthodox Church), turned into a bloody massacre of Assyrian villagers by the Iraqi army in August 1933. This event was the first sign to the Jewish community that minority rights were meaningless under the Iraqi monarchy. 
King Faisal, known for his liberal policies, died in September 1933, and was succeeded by Ghazi, his nationalistic anti-British son. Ghazi began promoting Arab nationalist organizations, headed by Syrian and Palestinian exiles. With the 1936–39 Arab revolt in Palestine, they were joined by rebels, such as the Grand Mufti of Jerusalem. The exiles preached pan-Arab ideology and fostered anti-Zionist propaganda. Under Iraqi nationalists, Nazi propaganda began to infiltrate the country, as Nazi Germany was anxious to expand its influence in the Arab world. Dr. Fritz Grobba, who resided in Iraq since 1932, began to vigorously and systematically disseminate hateful propaganda against Jews. Among other things, Arabic translation of Mein Kampf was published and Radio Berlin had begun broadcasting in Arabic language. Anti-Jewish policies had been implemented since 1934, and the confidence of Jews was further shaken by the growing crisis in Palestine in 1936. Between 1936 and 1939 ten Jews were murdered and on eight occasions bombs were thrown on Jewish locations. In 1941, immediately following the British victory in the Anglo-Iraqi War, riots known as the Farhud broke out in Baghdad in the power vacuum following the collapse of the pro-Axis government of Rashid Ali al-Gaylani while the city was in a state of instability. 180 Jews were killed and another 240 wounded; 586 Jewish-owned businesses were looted and 99 Jewish houses were destroyed. In some accounts the Farhud marked the turning point for Iraq's Jews. Other historians, however, see the pivotal moment for the Iraqi Jewish community much later, between 1948 and 1951, since Jewish communities prospered along with the rest of the country throughout most of the 1940s, and many Jews who left Iraq following the Farhud returned to the country shortly thereafter and permanent emigration did not accelerate significantly until 1950–51. Either way, the Farhud is broadly understood to mark the start of a process of politicization of the Iraqi Jews in the 1940s, primarily among the younger population, especially as a result of the impact it had on hopes of long term integration into Iraqi society. In the direct aftermath of the Farhud, many joined the Iraqi Communist Party in order to protect the Jews of Baghdad, yet they did not want to leave the country and rather sought to fight for better conditions in Iraq itself. At the same time the Iraqi government that had taken over after the Farhud reassured the Iraqi Jewish community, and normal life soon returned to Baghdad, which saw a marked betterment of its economic situation during World War II. Shortly after the Farhud in 1941, Mossad LeAliyah Bet sent emissaries to Iraq to begin to organize emigration to Israel, initially by recruiting people to teach Hebrew and hold lectures on Zionism. In 1942, Shaul Avigur, head of Mossad LeAliyah Bet, entered Iraq undercover in order to survey the situation of the Iraqi Jews with respect to immigration to Israel. During the 1942–43, Avigur made four further trips to Baghdad to arrange the required Mossad machinery, including a radio transmitter for sending information to Tel Aviv, which remained in use for 8 years. In late 1942, one of the emissaries explained the size of their task of converting the Iraqi community to Zionism, writing that "we have to admit that there is not much point in [organizing and encouraging emigration]. ... 
We are today eating the fruit of many years of neglect, and what we didn't do can't be corrected now through propaganda and creating one-day-old enthusiasm." It was not until 1947 that legal and illegal departures from Iraq to Israel began. Around 8,000 Jews left Iraq between 1919 and 1948, with another 2,000 leaving between mid-1948 to mid-1950. In 1948, there were approximately 150,000 Jews in Iraq. The community was concentrated in Baghdad and Basra. A few months before the UN vote on partition of Palestine, Iraq's prime minister Nuri al-Said told British diplomat Douglas Busk that he had nothing against the Iraqi Jews who were a long established and useful community. However, if the United Nations solution was not satisfactory, the Arab League might decide on severe measures against the Jews in Arab countries, and he would be unable to resist the proposal. In a speech at the General Assembly Hall at Flushing Meadow, New York, on Friday, 28 November 1947, Iraq's Foreign Minister, Fadel Jamall, included the following statement: "Partition imposed against the will of the majority of the people will jeopardize peace and harmony in the Middle East. Not only the uprising of the Arabs of Palestine is to be expected, but the masses in the Arab world cannot be restrained. The Arab–Jewish relationship in the Arab world will greatly deteriorate. There are more Jews in the Arab world outside of Palestine than there are in Palestine. In Iraq alone, we have about one hundred and fifty thousand Jews who share with Moslems and Christians all the advantages of political and economic rights. Harmony prevails among Moslems, Christians and Jews. But any injustice imposed upon the Arabs of Palestine will disturb the harmony among Jews and non-Jews in Iraq; it will breed inter-religious prejudice and hatred." On 19 February 1949, al-Said acknowledged the bad treatment that the Jews had been victims of in Iraq during the recent months. He warned that unless Israel would behave itself, events might take place concerning the Iraqi Jews. Al-Said's threats had no impact at the political level on the fate of the Jews but were widely published in the media. In 1948, the country was placed under martial law, and the penalties for Zionism were increased. Courts martial were used to intimidate wealthy Jews, Jews were again dismissed from civil service, quotas were placed on university positions, Jewish businesses were boycotted (E. Black, p. 347) and Shafiq Ades, one of the most important Jewish businessmen in the country (who was non-Zionist) was arrested and publicly hanged for allegedly selling goods to Israel, shocking the community.[citation needed] The Jewish community's general sentiment was that if a man as well connected and powerful as Ades could be eliminated by the state, other Jews would not be protected any longer. Additionally, like most Arab League states, Iraq forbade any legal emigration of its Jews after the 1948 war on the grounds that they might go to Israel and could strengthen that state. At the same time, increasing government oppression of the Jews fueled by anti-Israeli sentiment together with public expressions of antisemitism created an atmosphere of fear and uncertainty. However, by 1949 Jews were escaping Iraq at about a rate of 1,000 a month. At the time, the British believed that the Zionist underground was agitating in Iraq in order to assist US fund-raising and to "offset the bad impression caused by the Jewish attitudes to Arab refugees". 
The Iraqi government took in only 5,000 of the approximately 700,000 Palestinians who became refugees in 1948–49, "despite British and American efforts to persuade Iraq" to admit more. In January 1949, the pro-British Iraqi Prime Minister Nuri al-Said discussed the idea of deporting Iraqi Jews to Israel with British officials, who explained that such a proposal would benefit Israel and adversely affect Arab countries. According to Meir-Glitzenstein, such suggestions were "not intended to solve either the problem of the Palestinian Arab refugees or the problem of the Jewish minority in Iraq, but to torpedo plans to resettle Palestinian Arab refugees in Iraq". In July 1949 the British government proposed to Nuri al-Said a population exchange in which Iraq would agree to settle 100,000 Palestinian refugees in Iraq; Nuri stated that if a fair arrangement could be agreed, "the Iraqi government would permit a voluntary move by Iraqi Jews to Palestine." The Iraqi-British proposal was reported in the press in October 1949. On 14 October 1949 Nuri al-Said raised the exchange of population concept with the economic mission survey. At the Jewish Studies Conference in Melbourne in 2002, Philip Mendes summarised the effect of al-Said's vacillations on Jewish expulsion as: "In addition, the Iraqi Prime Minister Nuri al-Said tentatively canvassed and then shelved the possibility of expelling the Iraqi Jews, and exchanging them for an equal number of Palestinian Arabs." In March 1950, Iraq reversed their earlier ban on Jewish emigration to Israel and passed a law of one-year duration allowing Jews to emigrate on the condition of relinquishing their Iraqi citizenship. According to Abbas Shiblak, many scholars state that this was a result of American, British and Israeli political pressure on Tawfiq al-Suwaidi's government, with some studies suggesting there were secret negotiations. According to Ian Black, the Iraqi government was motivated by "economic considerations, chief of which was that almost all the property of departing Jews reverted to the state treasury" and also that "Jews were seen as a restive and potentially troublesome minority that the country was best rid of." Israel mounted an operation called "Operation Ezra and Nehemiah" to bring as many of the Iraqi Jews as possible to Israel. The Zionist movement at first tried to regulate the amount of registrants until issues relating to their legal status were clarified. Later, it allowed everyone to register. Two weeks after the law went into force, the Iraqi interior minister demanded a CID investigation over why Jews were not registering.[citation needed] A few hours after the movement allowed registration, four Jews were injured in a bomb attack at a café in Baghdad. Immediately following the March 1950 Denaturalisation Act, the emigration movement faced significant challenges. Initially, local Zionist activists forbade the Iraqi Jews from registering for emigration with the Iraqi authorities, because the Israeli government was still discussing absorption planning. However, on 8 April, a bomb exploded in a Jewish cafe in Baghdad, and a meeting of the Zionist leadership later that day agreed to allow registration without waiting for the Israeli government; a proclamation encouraging registration was made throughout Iraq in the name of the State of Israel. 
However, at the same time immigrants were also entering Israel from Poland and Romania, countries in which Prime Minister David Ben-Gurion assessed there was a risk that the communist authorities would soon "close their gates", and Israel therefore delayed the transportation of Iraqi Jews. As a result, by September 1950, while 70,000 Jews had registered to leave, many selling their property and losing their jobs, only 10,000 had left the country. According to Esther Meir-Glitzenstein, "The thousands of poor Jews who had left or been expelled from the peripheral cities, and who had gone to Baghdad to wait for their opportunity to emigrate, were in an especially bad state. They were housed in public buildings and were being supported by the Jewish community. The situation was intolerable." The delay became a significant problem for the Iraqi government of Nuri al-Said (who replaced Tawfiq al-Suwaidi in mid-September 1950), as the large number of Jews "in limbo" created problems politically, economically and for domestic security. "Particularly infuriating" to the Iraqi government was the fact that the source of the problem was the Israeli government. As a result of these developments, al-Said was determined to drive the Jews out of his country as quickly as possible. On 21 August 1950 al-Said threatened to revoke the license of the company transporting the Jewish exodus if it did not fulfill its daily quota of 500 Jews,[failed verification] and in September 1950, he summoned a representative of the Jewish community and warned the Jewish community of Baghdad to make haste; otherwise, he would take the Jews to the borders himself. Two months before the law expired, after about 85,000 Jews had registered, a bombing campaign began against the Jewish community of Baghdad. The Iraqi government convicted and hanged a number of suspected Zionist agents for perpetrating the bombings, but the issue of who was responsible remains a subject of scholarly dispute. All but a few thousand of the remaining Jews then registered for emigration. In all, about 120,000 Jews left Iraq. Historian Esther Meir-Glitzenstein disputed the claim that these bombings were the primary motive for the emigration of Iraqi Jews, noting that most accounts by these Jews did not mention the bombings as a cause for immigration. According to Gat, it is highly likely that one of Nuri as-Said's motives in trying to expel large numbers of Jews was the desire to aggravate Israel's economic problems (he had declared as such to the Arab world), although Nuri was well aware that the absorption of these immigrants was the policy on which Israel based its future. The Iraqi Minister of Defence told the U.S. ambassador that he had reliable evidence that the emigrating Jews were involved in activities injurious to the state and were in contact with communist agents. Between April 1950 and June 1951, Jewish targets in Baghdad were struck five times. Iraqi authorities then arrested 3 Jews, claiming they were Zionist activists, and sentenced two — Shalom Salah Shalom and Yosef Ibrahim Basri—to death. The third man, Yehuda Tajar, was sentenced to 10 years in prison. In May and June 1951, arms caches were discovered that allegedly belonged to the Zionist underground, allegedly supplied by the Yishuv after the Farhud of 1941.[citation needed] There has been much debate as to whether the bombs were planted by the Mossad to encourage Iraqi Jews to emigrate to Israel or if they were planted by Muslim extremists to help drive out the Jews. 
This has been the subject of lawsuits and inquiries in Israel. The emigration law was to expire in March 1951, one year after it was enacted. On 10 March 1951, with 64,000 Iraqi Jews still waiting to emigrate, the government enacted a new law blocking the assets of Jews who had given up their citizenship and extending the emigration period. The bulk of the Jews leaving Iraq did so via Israeli airlifts named Operation Ezra and Nehemiah with special permission from the Iraqi government. A small Jewish community remained in Iraq following Operation Ezra and Nehemiah. Restrictions were placed on them after the Ba'ath Party came to power in 1963, and following the Six-Day War, persecution greatly increased. Jews had their property expropriated and bank accounts frozen, their ability to do business was restricted, they were dismissed from public positions, and were placed under house arrest for extended periods of time. In 1968, scores of Jews were imprisoned on charges of spying for Israel. In 1969, about 50 were executed following show trials, most infamously in a mass public hanging of 14 men, including 9 Jews, after which a hundred thousand Iraqis marched past the bodies in a carnival-like atmosphere. Jews began sneaking across the border to Iran, from where they proceeded to Israel or the UK. In the early 1970s, the Iraqi government permitted Jewish emigration, and the majority of the remaining community left Iraq. By 2003, it was estimated that this once-thriving community had been reduced to 35 Jews in Baghdad and a handful more in Kurdish areas of the country. Although there was a small indigenous community, most Jews in Egypt in the early twentieth century were recent immigrants to the country,[failed verification] who did not share the Arabic language and culture. Many were members of the highly diverse Mutamassirun community, which included other groups such as Greeks, Armenians, Syrian Christians and Italians, in addition to the British and French colonial authorities. Until the late 1930s, the Jews, both indigenous and new immigrants, like other minorities, tended to apply for foreign citizenship in order to benefit from foreign protection. The Egyptian government made it very difficult for non-Muslim foreigners to become naturalized. The poorer Jews, most of them indigenous and Oriental Jews, were left stateless, although they were legally eligible for Egyptian nationality. The drive to Egyptianize public life and the economy harmed the minorities, but the Jews had more strikes against them than the others. In the agitation against the Jews of the late thirties and the forties, the Jew was seen as an enemy. The Jews were attacked because of their real or alleged links to Zionism; they were not discriminated against because of their religion or race, as in Europe, but for political reasons. The Egyptian Prime Minister Mahmoud an-Nukrashi Pasha told the British ambassador: "All Jews were potential Zionists [and] ... anyhow all Zionists were Communists." On 24 November 1947, the head of the Egyptian delegation to the United Nations General Assembly, Muhammad Hussein Heykal Pasha, said that "the lives of 1,000,000 Jews in Moslem countries would be jeopardized by the establishment of a Jewish state", adding: "if the U.N decide to amputate a part of Palestine in order to establish a Jewish state, ... Jewish blood will necessarily be shed elsewhere in the Arab world ... to place in certain and serious danger a million Jews." 
Mahmud Bey Fawzi (Egypt) said: "Imposed partition was sure to result in bloodshed in Palestine and in the rest of the Arab world." The exodus of the foreign mutamassirun ("Egyptianized") community, which included a significant number of Jews, began following the First World War, and by the end of the 1960s the entire mutamassirun was effectively eliminated. According to Andrew Gorman, this was primarily a result of the "decolonization process and the rise of Egyptian nationalism". The exodus of Egyptian Jews was impacted by the 1945 Anti-Jewish Riots in Egypt, though such emigration was not significant as the government stamped the violence out and the Egyptian Jewish community leaders were supportive of King Farouk. In 1948, approximately 75,000 Jews lived in Egypt. Around 20,000 Jews left Egypt during 1948–49 following the events of the 1948 Arab–Israeli War (including the 1948 Cairo bombings). A further 5,000 left between 1952 and 1956, in the wake of the Egyptian Revolution of 1952 and later the false flag Lavon Affair. The Israeli invasion as part of the Suez Crisis caused a significant upsurge in emigration, with 14,000 Jews leaving in less than six months between November 1956 and March 1957, and 19,000 further emigrating over the next decade. In October 1956, when the Suez Crisis erupted, the position of the mutamassirun, including the Jewish community, was significantly impacted. 1,000 Jews were arrested and 500 Jewish businesses were seized by the government. A statement branding the Jews as "Zionists and enemies of the state" was read out in the mosques of Cairo and Alexandria. Jewish bank accounts were confiscated and many Jews lost their jobs. Lawyers, engineers, doctors and teachers were not allowed to work in their professions. Thousands of Jews were ordered to leave the country and told that they may be sent to concentration camps if they stayed. They were allowed to take only one suitcase and a small sum of cash, and forced to sign declarations "donating" their property to the Egyptian government. Foreign observers reported that members of Jewish families were taken hostage, apparently to insure that those forced to leave did not speak out against the Egyptian government. Jews were expelled or left, forced out by the anti-Jewish feeling in Egypt. Some 25,000 Jews, almost half of the Jewish community left, mainly for Europe, the United States, South America and Israel, after being forced to sign declarations that they were leaving voluntarily, and agreed with the confiscation of their assets. Similar measures were enacted against British and French nationals in retaliation for the invasion. By 1957 the Jewish population of Egypt had fallen to 15,000. In 1960, the American embassy in Cairo wrote of Egyptian Jews that: "There is definitely a strong desire among most Jews to emigrate, but this is prompted by the feeling that they have limited opportunity, or from fear for the future, rather than by any direct or present tangible mistreatment at the hands of the government." In 1967, Jews were detained and tortured, and Jewish homes were confiscated. The vast majority of Jewish men without foreign passports were detained. Following the Six Day War, the community fell to 2,500 members and by the 1970s practically ceased to exist, with the exception of a few remaining families. As of 2015, an estimated 30 Jews remained in Egypt, most of them elderly. The Yemeni exodus began in 1881, seven months prior to the more well-known First Aliyah from Eastern Europe. 
The exodus came about as a result of European Jewish investment in the Mutasarrifate of Jerusalem, which created jobs for labouring Jews alongside local Muslim labour, thereby providing an economic incentive for emigration. This was aided by the reestablishment of Ottoman control over the Yemen Vilayet, which allowed freedom of movement within the empire, and by the opening of the Suez Canal, which reduced the cost of travelling considerably. Between 1881 and 1948, 15,430 Jews immigrated to Palestine legally. In 1942, prior to the formulation of the One Million Plan, David Ben-Gurion described his intentions with respect to such a potential policy to a meeting of experts and Jewish leaders, stating that "It is a mark of great failure by Zionism that we have not yet eliminated the Yemen exile [diaspora]." If one includes Aden, there were about 63,000 Jews in Yemen in 1948. In 1947, rioters killed at least 80 Jews in Aden, a British colony in southern Yemen. In 1948 the new Zaydi Imam Ahmad bin Yahya unexpectedly allowed his Jewish subjects to leave Yemen, and tens of thousands poured into Aden. The Israeli government's Operation Magic Carpet evacuated around 44,000 Jews from Yemen to Israel in 1949 and 1950. Emigration continued until 1962, when the civil war in Yemen broke out. A small community remained until 1976, though most of its members have since emigrated from Yemen. In March 2016, the Jewish population in Yemen was estimated to be about 50. The area now known as Lebanon and Syria was home to one of the oldest Jewish communities in the world, dating back to at least 300 BCE. In November 1945, fourteen Jews were killed in anti-Jewish riots in Tripoli. Unlike in other Arab countries, the Lebanese Jewish community did not face grave peril during the 1948 Arab–Israeli War and was reasonably protected by governmental authorities. Lebanon was also the only Arab country that saw a post-1948 increase in its Jewish population, principally due to the influx of Jews coming from Syria and Iraq. The 1932 national census put the country's Jewish population at around 3,500. In 1948, there were approximately 5,200 Jews in Lebanon. Their number increased after the first Arab-Israeli war to roughly 9,000 in 1951, including an estimated 2,000 Jewish asylum seekers. The largest communities of Jews in Lebanon were in Beirut and in the villages near Mount Lebanon: Deir al Qamar, Barouk, Bechamoun, and Hasbaya. While the French mandate saw a general improvement in conditions for Jews, the Vichy regime placed restrictions on them. The Jewish community actively supported Lebanese independence after World War II and had mixed attitudes toward Zionism. However, negative attitudes toward Jews increased after 1948, and, by 1967, most Lebanese Jews had emigrated—to Israel, the United States, Canada, and France. In 1971, Albert Elia, the 69-year-old Secretary-General of the Lebanese Jewish community, was kidnapped in Beirut by Syrian agents and imprisoned and tortured in Damascus, along with Syrian Jews who had attempted to flee the country. A personal appeal by the U.N. High Commissioner for Refugees, Prince Sadruddin Aga Khan, to the late President Hafez al-Assad failed to secure Elia's release. The remaining Jewish community was particularly hard hit by the civil war in Lebanon, and by the mid-1970s the community had collapsed. In the 1980s, Hezbollah kidnapped several Lebanese Jewish businessmen, and in the 2004 municipal elections, only one Jew voted. 
There are now only between 20 and 40 Jews living in Lebanon. In 1947, rioters in Aleppo burned the city's Jewish quarter and killed 75 people. As a result, nearly half of the Jewish population of Aleppo opted to leave the city, initially for neighbouring Lebanon. In 1948, there were approximately 30,000 Jews in Syria. In 1949, following defeat in the Arab–Israeli War, the CIA-backed March 1949 Syrian coup d'état installed Husni al-Za'im as the President of Syria. Za'im permitted the emigration of large numbers of Syrian Jews, and 5,000 left for Israel. Subsequent Syrian governments placed severe restrictions on the Jewish community, including barring emigration. In 1948, the government banned the sale of Jewish property, and in 1953 all Jewish bank accounts were frozen. The Syrian secret police closely monitored the Jewish community. Over the following years, many Jews managed to escape, and the work of supporters, particularly Judy Feld Carr, in smuggling Jews out of Syria and bringing their plight to the attention of the world raised awareness of their situation. Although the Syrian government attempted to stop Syrian Jews from exporting their assets, the American consulate in Damascus noted in 1950 that "the majority of Syrian Jews have managed to dispose of their property and to emigrate to Lebanon, Italy, and Israel". In November 1954, the Syrian government temporarily lifted its ban on Jewish emigration. The various restrictions that the Syrian government placed on the Jewish population were severe. Jews were legally barred from working for the government or for banks, obtaining driver's licenses, having telephones in their homes or business premises, or purchasing property. In March 1964, the Syrian government issued a decree prohibiting Jews from traveling more than three miles from the limits of their hometowns. In 1967, in the aftermath of the Six-Day War, antisemitic riots broke out in Damascus and Aleppo. Jews were allowed to leave their homes only for a few hours each day. Many Jews found it impossible to pursue their business ventures because the larger community was boycotting their products. In 1970, Israel launched Operation Blanket, a covert military and intelligence operation to evacuate Syrian Jews, managing to bring a few dozen young Jews to Israel. Clandestine Jewish emigration continued, as Jews attempted to sneak across the borders into Lebanon or Turkey, often with the help of smugglers, and make contact with Israeli agents or local Jewish communities. In 1972, demonstrations were held by 1,000 Syrian Jews in Damascus, after four Jewish women were killed as they attempted to flee Syria. The protest surprised Syrian authorities, who closely monitored the Jewish community, eavesdropped on its telephone conversations, and tampered with its mail. Following the Madrid Conference of 1991, the United States put pressure on the Syrian government to ease its restrictions on Jews, and during Passover in 1992, the government of Syria began granting exit visas to Jews on condition that they did not emigrate to Israel. At that time, the country had several thousand Jews. The majority left for the United States—most to join the large Syrian Jewish community in South Brooklyn, New York—although some went to France and Turkey, and 1,262 Syrian Jews who wanted to immigrate to Israel were brought there in a two-year covert operation. 
In 2004, the Syrian government attempted to establish better relations with its emigrants, and a delegation of a dozen Jews of Syrian origin visited Syria in the spring of that year. As of December 2014, only 17 Jews remained in Syria, according to Rabbi Avraham Hamra: nine men and eight women, all over 60 years of age. Following the 1948 Arab–Israeli War and the 1949 Armistice Agreements, all Jewish communities in Transjordan, the Jordanian-annexed West Bank, and the Egyptian-occupied Gaza Strip were depopulated. The communities and localities affected included the Jerusalem Jewish Quarter, Hebron, Ein Tzurim, Masu'ot Yitzhak, Revadim, Beit HaArava, Kalya, Kfar Etzion, Atarot, Kfar Darom, Neve Yaakov, and Tel Or. In many cases, these depopulations represented the final stages of earlier evacuations begun in response to the 1929 Palestine riots and the 1936–1939 Arab Revolt. The Hebron Jewish community, having already lost a majority of its population as a result of mandatory British evacuation following the 1929 Hebron Massacre, lost its sole remaining Jewish resident, Ya'akov Ben Shalom Ezra, during the war. Kfar Darom, the last of the Gaza Jewish communities following mandatory evacuations in 1929, was itself ultimately abandoned following a three-month siege by the Egyptian army in 1948. In the case of the Dead Sea-region kibbutzim of Beit HaArava and Kalya, negotiations were conducted with Transjordan's King Abdullah in an attempt to allow residents to remain. When those talks failed, the villagers fled by boat to an Israeli military post at Mount Sodom. The Judean settlement of Kfar Etzion, a kibbutz established southwest of Bethlehem, and the Jerusalem-adjacent Atarot and Neve Yaakov fared less peacefully during the conflict. All three villages were besieged by a combined force of the Arab Legion and local irregulars, resulting in the complete evacuation of Atarot and Neve Yaakov and the massacre of 127 of Etzion's defending force and citizens. The village of Tel Or had the distinction of being the only Jewish locality permitted in Transjordan proper at the time. Established in 1930 in the vicinity of the Naharayim hydroelectric power plant, the village was built as a housing compound for the Jewish crews operating the power plant and their families. Following a prolonged battle between Yishuv forces and the Arab Legion in the area, the residents of Tel Or were given an ultimatum to surrender or leave the village. The largest depopulation during the war occurred in Jerusalem's Jewish Quarter, whose entire population of about 2,000 Jews was besieged and ultimately forced to leave en masse. The defenders surrendered on 28 May 1948. The Jordanian commander is reported to have told his superiors: "For the first time in 1,000 years not a single Jew remains in the Jewish Quarter. Not a single building remains intact. This makes the Jews' return here impossible." Bahrain's tiny Jewish community, mostly the Jewish descendants of immigrants who entered the country in the early 20th century from Iraq, numbered between 600 and 1,500 in 1948. In the wake of the 29 November 1947 U.N. Partition vote, demonstrations against the vote in the Arab world were called for 2–5 December. The first two days of demonstrations in Bahrain saw rock-throwing against Jews, but on 5 December, mobs in the capital of Manama looted Jewish homes and shops, destroyed the synagogue, beat any Jews they could find, and murdered one elderly woman. As a result, many Bahraini Jews fled Bahrain. 
Some remained behind, but after riots broke out following the Six-Day War, the majority left. Bahraini Jews emigrated mainly to Israel (where a particularly large number settled in Pardes Hanna-Karkur), the United Kingdom, and the United States. As of 2006, only 36 Jews remained. The exodus of Iran's Jews refers to the emigration of Persian Jews from Pahlavi Iran in the 1950s and a later migration wave from Iran during and after the Iranian Revolution of 1979. At the time of Israeli independence in 1948, there were an estimated 140,000 to 150,000 Jews in Iran. Between 1948 and 1953, about one-third of Iranian Jews immigrated to Israel. Between 1948 and 1978, an estimated 70,000 Iranian Jews immigrated to Israel. In 1979, the year of the Islamic Revolution, there were about 80,000 Jews in Iran. In the aftermath of the revolution, emigration reduced the community to fewer than 20,000. The migration of Persian Jews after the Iranian Revolution was mainly due to fear of religious persecution, economic hardship, and insecurity after the deposition of the Shah's regime, the consequent internal violence, and the Iran–Iraq War. In the years following the Islamic Revolution, about 61,000 Jews emigrated from Iran, of whom about 36,000 went to the United States, 20,000 to Israel, and 5,000 to Europe. While antisemitism in Iran is not as severe as in Europe and the Arab world, the strong anti-Zionist policy of the Islamic Republic of Iran created a tense and uncomfortable situation for Iranian Jews, who became vulnerable to accusations of alleged collaboration with Israel. In total, more than 80% of Iranian Jews fled or migrated from the country between 1979 and 2006. When the Republic of Turkey was established in 1923, Aliyah was not particularly popular among Turkish Jewry; migration from Turkey to Palestine was minimal in the 1920s. During 1923–1948, approximately 7,300 Jews emigrated from Turkey to Palestine. After the 1934 Thrace pogroms following the 1934 Turkish Resettlement Law, immigration to Palestine increased; it is estimated that 521 Jews left for Palestine from Turkey in 1934 and 1,445 left in 1935. Immigration to Palestine was organized by the Jewish Agency and the Palestine Aliya Anoar Organization. The Varlık Vergisi, a capital tax established in 1942, was also significant in encouraging emigration from Turkey to Palestine; between 1943 and 1944, 4,000 Jews emigrated. The Jews of Turkey reacted very favorably to the creation of the State of Israel. Between 1948 and 1951, 34,547 Jews immigrated to Israel, nearly 40% of the Jewish population at the time. Immigration was halted for several months from November 1948, when Turkey suspended migration permits as a result of pressure from Arab countries. In March 1949, the suspension was lifted when Turkey officially recognized Israel, and emigration continued, with 26,000 emigrating within the same year. The migration was entirely voluntary and was primarily driven by economic factors, given that the majority of emigrants were from the lower classes. In fact, the migration of Jews to Israel is the second largest mass emigration wave out of Turkey, the first being the population exchange between Greece and Turkey. After 1951, emigration of Jews from Turkey to Israel slowed materially. In the mid-1950s, 10% of those who had moved to Israel returned to Turkey. A new synagogue, the Neve Şalom, was constructed in Istanbul in 1951. Generally, Turkish Jews in Israel have integrated well into society and are not distinguishable from other Israelis. 
However, they maintain their Turkish culture and connection to Turkey, and are strong supporters of close relations between Israel and Turkey. Even though populist antisemitism was historically rarer in the Ottoman Empire and Anatolia than in Europe, antisemitism still existed in the empire, beginning with the maltreatment of the Jewish Yishuv prior to World War I and, most notably, the 1917 Tel Aviv and Jaffa deportation, which has been considered the empire's first antisemitic act. Since the establishment of the state of Israel in 1948, there has been a rise in anti-Semitism. On the night of 6–7 September 1955, the Istanbul pogrom was unleashed. Although primarily aimed at the city's Greek population, the Jewish and Armenian communities of Istanbul were also targeted to a degree. The damage caused was mainly material (more than 4,000 shops and 1,000 houses belonging to Greeks, Armenians and Jews were destroyed), but it deeply shocked minorities throughout the country. Since 1986, increased attacks on Jewish targets throughout Turkey have affected the security of the community and prompted many to emigrate. The Neve Shalom Synagogue in Istanbul has been attacked by Islamic militants three times. On 6 September 1986, Arab terrorists gunned down 22 Jewish worshippers and wounded 6 during Shabbat services at Neve Shalom. This attack was blamed on the Palestinian militant Abu Nidal. In 1992, the Lebanon-based Shi'ite Muslim group Hezbollah carried out a bombing against the synagogue, but nobody was injured. The synagogue was hit again during the 2003 Istanbul bombings alongside the Bet Israel Synagogue, killing 20 and injuring over 300 people, both Jews and Muslims. With the increasing anti-Israeli and anti-Jewish attitudes in modern Turkey, especially under the Turkish government of Recep Tayyip Erdoğan, the country's Jewish community, while still believed to be the largest among Muslim countries, declined from about 26,000 in 2010 to about 17,000–18,000 in 2016. Other Muslim-majority countries The Afghan Jewish community declined from about 40,000 in the early 20th century to 5,000 by 1934 due to persecution. Many Afghan Jews fled to Persia, although some came to Palestine. Following the Kazakh famine of 1930–1933, a significant number of Bukharan Jews crossed the border into the Kingdom of Afghanistan as part of the wider famine-related refugee crisis; leaders of the communities petitioned Jewish communities in Europe and the United States for support. In total, some 60,000 refugees had fled from the Soviet Union and reached Afghanistan. In 1932, Mohammed Nadir Shah signed a border treaty with the Soviets in order to prevent asylum seekers from crossing into Afghanistan from Soviet Central Asia. Later that year, Afghanistan began deporting Soviet-origin refugees either back to the Soviet Union or to specified territories in China. Soviet Jews who were already present in Afghanistan with the intent to flee further south were detained in Kabul, and all Soviet Jews who were apprehended at the border were immediately deported. All Soviet citizens, including these Bukharan Jews, were suspected by both Afghan and British government officials of conducting espionage with the intention of disseminating Bolshevik propaganda. From September 1933, many of these ex-Soviet Jewish refugees in northern Afghanistan were forcibly relocated to major cities such as Kabul and Herat, but continued to live under restrictions on work and trade. 
Whilst it has been claimed that the November 1933 assassination of Mohammad Nadir Shah made the situation worse, this is likely to have had only limited impact. In 1935, the Jewish Telegraph Agency reported that "Ghetto rules" had been imposed on Afghan Jews, requiring them to wear particular clothes, that Jewish women stay out of markets, that no Jews live within certain distances of mosques and that Jews did not ride horses. From 1935 to 1941, under Prime Minister Mohammad Hashim Khan (uncle of the King) Germany was the most influential country in Afghanistan. The Nazis regarded the Afghans (like the Iranians) as Aryans. In 1938, it was reported that Jews were only allowed to work as shoe-polishers. Contact with Afghanistan was difficult at this time and with many Jews facing persecution around the world, reports reached the outside world after a delay and were rarely researched thoroughly. Jews were allowed to emigrate in 1951 and most moved to Israel and the United States. By 1969, some 300 remained, and most of these left after the Soviet invasion of 1979, leaving 10 Afghan Jews in 1996, most of them in Kabul. As of 2007, more than 10,000 Jews of Afghan descent were living in Israel and over 200 families of Afghan Jews lived in New York City. In 2001 it was reported that two Jews were left in Afghanistan, Ishaq Levin and Zablon Simintov, and that they did not talk to each other. Levin died in 2005, leaving Simintov as the last Jew living in Afghanistan. Simintov left on 7 September 2021, leaving no known Jews in the country. Penang was historically home to a Jewish community of Baghdadi origin that dated back to colonial times. Much of this community emigrated overseas in the decades following World War II, and the last Jewish resident of Penang died in 2011, making this community extinct. At the time of Pakistani independence in 1947, some 1,300 Jews remained in Karachi, many of them Bene Israel Jews, observing Sephardic Jewish rites. A small Ashkenazi population was also present in the city. Some Karachi streets still bear names that hark back to a time when the Jewish community was more prominent; such as Ashkenazi Street, Abraham Reuben Street (named after the former member of the Karachi Municipal Corporation), Ibn Gabirol Street, and Moses Ibn Ezra Street—although some streets have been renamed, they are still locally referred to by their original names. Bani Israel Graveyard - a small Jewish cemetery - still exists in the vast Mewa Shah Graveyard near the shrine of a Sufi saint.[citation needed] The neighbourhood of Baghdadi in Lyari Town is named for the Baghdadi Jews who once lived there. A community of Bukharan Jews was also found in the city of Peshawar, where many buildings in the old city feature a Star of David as exterior decor as a sign of the Hebrew origins of its owners. Members of the community settled in the city as merchants as early as the 17th century, although the bulk arrived as refugees fleeing the advance of the Russian Empire into Bukhara, and later the Russian Revolution in 1917. Today, there are virtually no Jewish communities remaining in Karachi or Peshawar.[citation needed] The exodus of Jews from Pakistan to Bombay and other cities in India came just prior to the creation of Israel in 1948, when anti-Israeli sentiments rose. By 1953, fewer than 500 Jews were reported to reside in all of Pakistan. Anti-Israeli sentiment and violence often flared during ensuing conflicts in the Middle East, resulting in a further movement of Jews out of Pakistan. 
Presently, a large number of Jews from Karachi live in the city of Ramla in Israel. The Jewish community in Sudan was concentrated in the capital Khartoum, and had been established in the late 19th century. At its peak between 1930 and 1950, the community had about 800 to 1,000 members, mainly Jews of Sephardi and Mizrahi backgrounds from North Africa, Syria, and Iraq, though some came from Europe in the 1930s. The community constructed a synagogue and a club. Between 1948 and 1956, some members of the community left the country. Following independence in 1956, hostility against the Jewish community began to grow, and from 1957 many Sudanese Jews began to leave for Israel, the United States, and Europe, particularly the UK and Switzerland. By the early 1960s the Sudanese Jewish community had been greatly depleted. In 1967, following the Six-Day War, antisemitic attacks advocating the torture and murder of prominent Jewish community leaders began to appear in Sudanese newspapers, and there was a mass arrest of Jewish men. Jewish emigration intensified as a result. The last Jews of Sudan left the country in the early 1970s. About 500 Sudanese Jews went to Israel and the rest to Europe and the US. The Jewish population in East Bengal was 200 at the time of the Partition of India in 1947. It included a Baghdadi Jewish merchant community that had settled in Dhaka during the 17th century. A prominent Jew in East Pakistan was Mordecai Cohen, who was a Bengali and English newsreader on East Pakistan Television. By the late 1960s, much of the Jewish community had left for Calcutta. Table of the Jewish population in Muslim countries In 1948, there were between 758,000 and 881,000 Jews living in communities throughout the Arab world. Today, there are fewer than 8,600. In some Arab states, such as Libya, which was about 3% Jewish, the Jewish community no longer exists; in other Arab countries, only a few dozen to a few hundred Jews remain. Absorption Of the 900,000 Jewish emigrants, around 650,000 emigrated to Israel, and 235,000 to France. The remainder went to other countries in Europe as well as to the Americas. About two-thirds of the exodus was from North Africa; Morocco's Jews went mostly to Israel, Algeria's Jews went mostly to France, and Tunisia's Jews departed for both countries. The majority of Jews in Arab countries eventually immigrated to the modern State of Israel. Hundreds of thousands of Jews were temporarily settled in the numerous immigrant camps throughout the country. Those were later transformed into ma'abarot (transit camps), where tin dwellings were provided to house up to 220,000 residents. The ma'abarot existed until 1963. The population of the transit camps was gradually absorbed and integrated into Israeli society. Many of the North African and Middle Eastern Jews had a hard time adjusting to the new dominant culture and the change of lifestyle, and there were claims of discrimination. France was a major destination. About 50% (300,000 people) of modern French Jews have roots in North Africa. In total, it is estimated that between 1956 and 1967, about 235,000 North African Jews from Algeria, Tunisia and Morocco immigrated to France due to the decline of the French Empire and following the Six-Day War. The United States was a destination of many Egyptian, Lebanese and Syrian Jews. 
Advocacy groups Advocacy groups acting on behalf of Jews from Arab countries include WOJAC, JJAC and JIMENA, which have been active in recent years in presenting their views to various governmental bodies in the US, Canada and the UK, among others, as well as appearing before the United Nations Human Rights Council. Views on the exodus In 2003, H.Con.Res. 311 was introduced in the House of Representatives by Congresswoman Ileana Ros-Lehtinen. In 2004, simple resolutions H.Res. 838 and S.Res. 325 were introduced in the House of Representatives and the Senate by Jerrold Nadler and Rick Santorum, respectively. In 2007, simple resolutions H.Res. 185 and S.Res. 85 were introduced in the House of Representatives and the Senate. The resolutions had been written together with the lobbying group Justice for Jews from Arab Countries, whose founder Stanley Urman described the resolution in 2009 as "perhaps our most significant accomplishment". The House of Representatives resolution was sponsored by Jerrold Nadler, who followed the resolutions in 2012 with House Bill H.R. 6242. The 2007–08 resolutions proposed that for any "comprehensive Middle East peace agreement to be credible and enduring, the agreement must address and resolve all outstanding issues relating to the legitimate rights of all refugees, including Jews, Christians and other populations displaced from countries in the Middle East", and encouraged President Barack Obama and his administration to mention Jewish and other refugees when mentioning Palestinian refugees at international forums. The 2012 bill, which was moved to committee, proposed to recognize the plight of "850,000 Jewish refugees from Arab countries", as well as other refugees, such as Christians from the Middle East, North Africa, and the Persian Gulf. Jerrold Nadler explained his view in 2012 that "the suffering and terrible injustices visited upon Jewish refugees in the Middle East needs to be acknowledged. It is simply wrong to recognize the rights of Palestinian refugees without recognizing the rights of nearly 1 million Jewish refugees who suffered terrible outrages at the hands of their former compatriots." Critics have suggested the campaign is simply an anti-Palestinian "tactic", which Michael Fischbach explains as "a tactic to help the Israeli government deflect Palestinian refugee claims in any final Israeli–Palestinian peace deal, claims that include Palestinian refugees' demand for the 'right of return' to their pre-1948 homes in Israel." The issue of comparison of the Jewish exodus with the Palestinian exodus was raised by the Israeli Foreign Ministry as early as 1961. In 2012, a special campaign on behalf of the Jewish refugees from Arab countries was established and gained momentum. The campaign urges the creation of an international fund that would compensate both Jewish and Palestinian Arab refugees, and would document and research the plight of Jewish refugees from Arab countries. In addition, the campaign plans to create a national day of recognition in Israel to remember the 850,000 Jewish refugees from Arab countries, as well as to build a museum that would document their history and cultural heritage and collect their testimony. On 21 September 2012, a special event was held at the United Nations to highlight the issue of Jewish refugees from Arab countries. 
Israeli ambassador Ron Prosor asked the United Nations to "establish a center of documentation and research" that would document the "850,000 untold stories" and "collect the evidence to preserve their history", which he said was ignored for too long. Israeli Deputy Foreign Minister Danny Ayalon said that "We are 64 years late, but we are not too late." Diplomats from approximately two dozen countries and organizations, including the United States, the European Union, Germany, Canada, Spain, and Hungary attended the event. In addition, Jews from Arab countries attended and spoke at the event. In response to the Palestinian Nakba narrative, the term "Jewish Nakba" is sometimes used to refer to the exodus of Jews from Arab countries in the years and decades following the creation of the State of Israel. Israeli columnist Ben Dror Yemini, himself a Mizrahi Jew, wrote: However, there is another Nakba: the Jewish Nakba. During those same years [the 1940s], there was a long line of slaughters, of pogroms, of property confiscation and of deportations against Jews in Islamic countries. This chapter of history has been left in the shadows. The Jewish Nakba was worse than the Palestinian Nakba. The only difference is that the Jews did not turn that Nakba into their founding ethos. To the contrary. Professor Ada Aharoni, chairman of The World Congress of the Jews from Egypt, argues in an article entitled "What about the Jewish Nakba?" that exposing the truth about the exodus of the Jews from Arab states could facilitate a genuine peace process, since it would enable Palestinians to realize they were not the only ones who suffered, and thus their sense of "victimization and rejectionism" will decline. Additionally, Canadian MP and international human rights lawyer Irwin Cotler has referred to the "double Nakba". He criticizes the Arab states' rejectionism of the Jewish state, their subsequent invasion to destroy the newly formed nation, and the punishment meted out against their local Jewish populations: The result was, therefore, a double Nakba: not only of Palestinian-Arab suffering and the creation of a Palestinian refugee problem, but also, with the assault on Israel and on Jews in Arab countries, the creation of a second, much less known, group of refugees—Jewish refugees from Arab countries. Iraqi-born Ran Cohen, a former member of the Knesset, said: "I have this to say: I am not a refugee. I came at the behest of Zionism, due to the pull that this land exerts, and due to the idea of redemption. Nobody is going to define me as a refugee." Yemeni-born Yisrael Yeshayahu, former Knesset speaker, Labor Party, stated: "We are not refugees. [Some of us] came to this country before the state was born. We had messianic aspirations." And Iraqi-born Shlomo Hillel, also a former speaker of the Knesset, Labor Party, claimed: "I do not regard the departure of Jews from Arab lands as that of refugees. They came here because they wanted to, as Zionists." Historian Tom Segev stated: "Deciding to emigrate to Israel was often a very personal decision. It was based on the particular circumstances of the individual's life. They were not all poor, or 'dwellers in dark caves and smoking pits'. Nor were they always subject to persecution, repression or discrimination in their native lands. They emigrated for a variety of reasons, depending on the country, the time, the community, and the person." 
Iraqi-born Israeli historian Avi Shlaim, speaking of the wave of Iraqi Jewish migration to Israel, concludes that, even though Iraqi Jews were "victims of the Israeli-Arab conflict", Iraqi Jews are not refugees, saying "nobody expelled us from Iraq, nobody told us that we were unwanted." He restated that case in a review of Martin Gilbert's book, In Ishmael's House. Yehuda Shenhav has criticized the analogy between Jewish emigration from Arab countries and the Palestinian exodus. He also says "The unfounded, immoral analogy between Palestinian refugees and Mizrahi immigrants needlessly embroils members of these two groups in a dispute, degrades the dignity of many Mizrahi Jews, and harms prospects for genuine Jewish-Arab reconciliation." He has stated that "the campaign's proponents hope their efforts will prevent conferral of what is called a 'right of return' on Palestinians, and reduce the size of the compensation Israel is liable to be asked to pay in exchange for Palestinian property appropriated by the state guardian of 'lost' assets." Ella Shohat has described the Zionist master narrative of the migration of Jews from Muslim lands to Israel as a discourse in which "European Zionism 'saved' Sephardi Jews from the harsh rule of their Arab 'captors'" and "took them out of 'primitive conditions' of poverty and superstition and ushered them gently into a modern Western society characterized by tolerance, democracy, and 'humane values.'" She cites the impression of Israeli journalist Arye Gelblum [he] in Haaretz in 1949: This is immigration of a race we have not yet known in the country .... We are dealing with people whose primitivism is at a peak, whose level of knowledge is one of virtually absolute ignorance, and worse, who have little talent for understanding anything intellectual. Generally, they are only slightly better than the general level of the Arabs, Negroes, and Berbers in the same regions. In any case, they are at an even lower level than what we knew with regard to the former Arabs of Eretz Israel ... . These Jews also lack roots in Judaism, as they are totally subordinated to the play of savage and primitive instincts... As with the Africans you will find card games for money, drunkenness and prostitution. Most of them have serious eye, skin and sexual diseases, without mentioning robberies and thefts. Chronic laziness and hatred for work, there is nothing safe about this asocial element... "Aliyat HaNoar" [the official organization dealing with young immigrants] refuses to receive Moroccan children and the Kibbutzim will not hear of their absorption among them. Israeli historian Yehoshua Porath has rejected the comparison, arguing that while there is a superficial similarity, the ideological and historical significance of the two population movements are entirely different. Porath points out that the immigration of Jews from Arab countries to Israel, expelled or not, was the "fulfilment of a national dream". He also argues that the achievement of this Zionist goal was only made possible through the endeavors of the Jewish Agency's agents, teachers, and instructors working in various Arab countries since the 1930s. Porath contrasts this with the Palestinian Arabs' flight of 1948 as completely different. He describes the outcome of the Palestinian's flight as an "unwanted national calamity" that was accompanied by "unending personal tragedies". 
The result was "the collapse of the Palestinian community, the fragmentation of a people, and the loss of a country that had in the past been mostly Arabic-speaking and Islamic." Alon Liel, a former director-general of the Foreign Ministry, says that many Jews escaped from Arab countries, but he does not call them "refugees". On 21 September 2012, at a United Nations conference, the issue of Jewish refugees from Arab countries was criticized by Hamas spokesman Sami Abu Zuhri, who stated that the Jewish refugees from Arab countries were in fact responsible for the Palestinian displacement and that "those Jews are criminals rather than refugees." In regard to the same conference, Palestinian politician Hanan Ashrawi has argued that Jews from Arab lands are not refugees at all and that Israel is using their claims in order to counterbalance those of Palestinian refugees against it. Ashrawi said that "If Israel is their homeland, then they are not 'refugees'; they are emigrants who returned either voluntarily or due to a political decision." Property losses and compensation In Libya, Iraq and Egypt many Jews lost vast portions of their wealth and property as part of the exodus because of severe restrictions on moving their wealth out of the country. In other countries in North Africa, the situation was more complex. For example, in Morocco emigrants were not allowed to take more than $60 worth of Moroccan currency with them, although generally they were able to sell their property prior to leaving, and some were able to work around the currency restrictions by exchanging cash into jewelry or other portable valuables. This led some scholars to speculate that the Moroccan and Algerian Jewish populations, comprising a large percentage of the exodus, did not on the whole suffer large property losses. Yemeni Jews were usually able to sell what property they possessed prior to departure, although not always at market rates. Various estimates of the value of property abandoned by the Jewish exodus have been published, with the quoted figures varying widely from a few billion dollars to hundreds of billions. The World Organization of Jews from Arab Countries (WOJAC) estimated in 2006 that Jewish property abandoned in Arab countries would be valued at more than $100 billion, later revising its estimate in 2007 to $300 billion. It also estimated Jewish-owned real estate left behind in Arab lands at 100,000 square kilometers (four times the size of the state of Israel). The type and extent of linkage between the Jewish exodus from Arab countries and the 1948 Palestinian exodus has also been the source of controversy. The Jewish advocacy group JJAC has suggested that there are strong ties between the two processes and that decoupling the two issues is unjust. Holocaust restitution expert Sidney Zabludoff, writing for the Israeli-advocacy group Jerusalem Center for Public Affairs, suggests that the losses sustained by the Jews who fled Arab countries since 1947 amount to $700 million at period prices, based on an estimated per capita wealth of $700 multiplied by one million refugees, equating to $6 billion today, assuming that the entire exodus left all of their wealth behind. The official position of the Israeli government is that Jews from Arab countries are considered refugees, and it considers their rights to property left in countries of origin as valid and existent. 
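Spelled out, Zabludoff's headline figure is simply the product of his two stated inputs, and the present-day figure he quotes implies a price-level adjustment of roughly a factor of eight to nine (the exact deflator he used is not given here):

\[
\$700~\text{per capita} \times 1{,}000{,}000~\text{refugees} = \$700~\text{million at period prices} \approx \$6~\text{billion today}
\]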
In 2008, the Orthodox Sephardi party, Shas, announced its intention to seek compensation for Jewish refugees from Arab states. In 2009, Israeli lawmakers introduced a bill into the Knesset to make compensation for Jews from Arab and Muslim countries an integral part of any future peace negotiations by requiring compensation on behalf of current Jewish Israeli citizens who were expelled from Arab countries after Israel was established in 1948, leaving behind a significant amount of valuable property. In February 2010, the bill passed its first reading. The bill was sponsored by MK Nissim Ze'ev (Shas) and followed a resolution passed in the United States House of Representatives in 2008, calling for refugee recognition to be extended to Jews and Christians similar to that extended to Palestinians in the course of Middle East peace talks. Memorialization in Israel On 9 May 2021, the first physical memorialization in Israel of the Departure and Expulsion of Jews from Arab Lands and Iran was placed on the Sherover Promenade in Jerusalem. It is titled the Departure and Expulsion Memorial, following the Knesset law that provides for annual recognition of this Jewish experience on 30 November. The text on the Memorial reads: "With the birth of the State of Israel, over 850,000 Jews were forced from Arab Lands and Iran. The desperate refugees were welcomed by Israel. By Act of the Knesset: 30 Nov, annually, is the Departure and Expulsion Memorial Day. Memorial donated by the Jewish American Society for Historic Preservation, with support from the World Sephardi Federation, City of Jerusalem and the Jerusalem Foundation." The sculpture is the interpretive work of Sam Philipe, a fifth-generation Jerusalemite. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-11] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, there remained opposition from a majority of those present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed it. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and its interoperability with PCs were beneficial to many software developers. The use of the C programming language proved useful, as it safeguarded software compatibility should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's limited RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM to be a challenging aspect of development given the 3.5 megabyte restriction.
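The 3.5 megabyte restriction mentioned above can be illustrated with a short calculation. The Python sketch below is an informal illustration rather than an official breakdown: it assumes the figure refers to the console's combined memory pools, taking the 2 MB of main RAM and 1 MB of video RAM listed in the hardware description later in this article, plus a commonly cited 512 KB of dedicated sound RAM (a figure not stated in this article and included here only as an assumption).

```python
# Informal sketch of the PlayStation's memory budget as seen by a developer.
# Main and video RAM figures come from the hardware description below; the
# 512 KB sound RAM figure is a commonly cited value assumed here.

KB = 1024
MB = 1024 * KB

memory_pools = {
    "main RAM (CPU)":  2 * MB,    # program code, game logic, decompressed assets
    "video RAM (GPU)": 1 * MB,    # frame buffers and textures
    "sound RAM (SPU)": 512 * KB,  # ADPCM samples and sequenced music (assumed figure)
}

total = sum(memory_pools.values())
print(f"Total addressable game memory: {total / MB:.1f} MB")  # -> 3.5 MB

# Everything a scene needs at once (code, textures, audio) has to fit in
# these pools, which is why contemporary developers described allocating
# RAM as one of the more challenging aspects of PlayStation development.
for name, size in memory_pools.items():
    print(f"  {name:16s} {size / MB:4.2f} MB")
```

Kutaragi's remark in the next paragraph about deliberately not doubling the RAM should be read against this budget.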
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with what was described as a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One contemporary retail account of the later American launch recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony eventually launched the console (in its PS One form) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be officially released because a third company had registered the trademark; the officially distributed Sega Saturn therefore dominated the market at first, but as the Sega console was withdrawn, PlayStation imports and widespread piracy increased. In China, the Sega Saturn was likewise the most popular 32-bit console, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people entering adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans rendered with the controller's geometric button symbols in place of certain letters, such as "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts widened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. In 1998, Sega, spurred by declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software necessary to program PlayStation games and applications using C compilers.
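As a rough, illustrative check of the display figures quoted above, the following Python sketch relates the 1 MB of video RAM to frame-buffer sizes at the console's minimum and maximum resolutions and converts the quoted per-second polygon throughput into a per-frame budget. The 16-bit displayed frame buffer, the use of double buffering, the intermediate 320×240 mode, and the 30 and 60 frames-per-second targets are assumptions made for illustration, not specifications taken from this article.

```python
# Illustrative arithmetic from the figures quoted above: 1 MB of video RAM,
# display resolutions of 256x224 to 640x480, and GPU throughput of about
# 180,000 polygons per second (360,000 per second when flat-shaded).

VRAM_KB = 1024                  # 1 MB of video RAM
BYTES_PER_PIXEL = 2             # assumed 16-bit displayed frame buffer

for width, height in [(256, 224), (320, 240), (640, 480)]:
    one_buffer_kb = width * height * BYTES_PER_PIXEL / 1024
    print(f"{width}x{height}: one frame buffer = {one_buffer_kb:6.0f} KB, "
          f"double-buffered = {2 * one_buffer_kb:6.0f} KB of {VRAM_KB} KB VRAM")
# A double-buffered 16-bit display at 640x480 (1,200 KB) would exceed the
# 1 MB of VRAM outright, while lower resolutions leave several hundred
# kilobytes free for texture data.

# Per-frame polygon budgets implied by the quoted per-second throughput,
# assuming frame-rate targets of 30 and 60 frames per second.
for fps in (30, 60):
    print(f"at {fps} fps: ~{180_000 // fps:,} polygons per frame "
          f"(~{360_000 // fps:,} if flat-shaded)")
```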
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons marked with simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue controller, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUIs for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem!
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency (and duplicated discs therefore omitted it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping during full-motion video playback or emit physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-highest number of games ever produced for a console. Its success was a significant financial boon for Sony, with profits from their video game division coming to account for roughly 23% of the company's total operating profit. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry forward, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to deliver a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the Nintendo 64,[d] which relied on proprietary cartridges even though the industry had expected it to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the per-unit cost of production was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/15/hollywood-isnt-happy-about-the-new-seedance-2-0-video-generator/] | [TOKENS: 1061] |
Hollywood isn’t happy about the new Seedance 2.0 video generator Hollywood organizations are pushing back against a new AI video model called Seedance 2.0, which they say has quickly become a tool for “blatant” copyright infringement. ByteDance, the Chinese company that recently finalized a deal to sell TikTok’s U.S. operations (it retains a stake in the new joint venture), launched Seedance 2.0 earlier this week. According to the Wall Street Journal, the updated model is currently available to Chinese users of ByteDance’s Jianying app, and the company says it will soon be available to global users of its CapCut app. Similar to tools such as OpenAI’s Sora, Seedance allows users to create videos (currently limited to 15 seconds in length) by just entering a text prompt. And like Sora, Seedance quickly drew criticism for an apparent lack of guardrails around the ability to create videos using the likeness of real people, as well as studios’ intellectual property. After one X user posted a brief video showing Tom Cruise fighting Brad Pitt, which they said was created by “a 2 line prompt in seedance 2,” “Deadpool” screenwriter Rhett Reese responded, “I hate to say it. It’s likely over for us.” The Motion Picture Association soon issued a statement from CEO Charles Rivkin demanding that ByteDance “immediately cease its infringing activity.” “In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale,” Rivkin said. “By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs.” The Human Artistry Campaign — an initiative backed by Hollywood unions and trade groups — condemned Seedance 2.0 as “an attack on every creator around the world,” while the actors’ union SAG-AFTRA said it “stands with the studios in condemning the blatant infringement enabled by Bytedance’s new AI video model Seedance 2.0.” Seedance videos have apparently featured Disney-owned characters such as Spider-Man, Darth Vader, and Grogu, better known as Baby Yoda, prompting the company to take legal action. Axios reports that Disney has sent a cease-and-desist letter accusing ByteDance of a “virtual smash-and-grab of Disney’s IP” and claiming the Chinese company is “hijacking Disney’s characters by reproducing, distributing, and creating derivative works featuring those characters.” Disney isn’t necessarily opposed to working with AI companies — while it has reportedly sent a cease-and-desist letter to Google over similar issues, it has signed a three-year licensing deal with OpenAI. Variety reports that Paramount followed suit by sending ByteDance a cease-and-desist letter on Saturday.
The letter claimed that “much of the content that the Seed Platforms produce contains vivid depictions of Paramount’s famous and iconic franchises and characters” and that this content “is often indistinguishable, both visually and audibly” from Paramount’s films and TV shows. TechCrunch has reached out to ByteDance for comment. This post was originally published on February 14, 2026. It has been updated to include information about Paramount’s cease-and-desist letter. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-Souami_Souchay_2012-5] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread in the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north–south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground; it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall) and one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, remains the dominant influence on the planet's geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible to the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
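The size comparisons just given (a diameter about half of Earth's and a surface area about equal to Earth's dry land) can be checked with basic spherical geometry. In the Python sketch below, Earth's diameter of roughly 12,742 km and dry-land area of roughly 149 million km² are standard reference values assumed for the comparison, not figures taken from this article.

```python
import math

# Back-of-the-envelope check of the size comparisons quoted above.
MARS_DIAMETER_KM = 6_779           # stated above
EARTH_DIAMETER_KM = 12_742         # standard reference value (assumed)
EARTH_LAND_AREA_KM2 = 149e6        # approximate dry-land area of Earth (assumed)

def sphere_surface_area(diameter_km):
    """Surface area of a sphere: A = 4 * pi * r^2."""
    radius = diameter_km / 2
    return 4 * math.pi * radius ** 2

mars_area = sphere_surface_area(MARS_DIAMETER_KM)
print(f"diameter ratio Mars/Earth: {MARS_DIAMETER_KM / EARTH_DIAMETER_KM:.2f}")  # ~0.53
print(f"Mars surface area:         {mars_area / 1e6:.0f} million km^2")          # ~144
print(f"ratio to Earth's dry land: {mars_area / EARTH_LAND_AREA_KM2:.2f}")       # ~0.97
```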
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago, and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the following are the three primary periods: the Noachian, the Hesperian, and the Amazonian. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
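Earlier in this section Mars is described as having about 11% of Earth's mass, roughly half its diameter, and about 38% of Earth's surface gravity. These figures are mutually consistent, since surface gravity scales as mass divided by the square of the radius, as the short sketch below shows; the Earth diameter used for the ratio is a standard reference value assumed here, not a figure from this article.

```python
# Consistency check: surface gravity g is proportional to M / R^2.
mass_ratio = 0.11                     # Mars mass as a fraction of Earth's (stated above)
radius_ratio = 6_779 / 12_742         # Mars/Earth diameter ratio (Earth value assumed)

gravity_ratio = mass_ratio / radius_ratio ** 2
print(f"predicted surface gravity: {gravity_ratio:.2f} of Earth's")   # ~0.39

# This is close to the ~38% quoted above; the small difference comes from
# rounding the mass fraction to two significant figures.
```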
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day (22 millirads per day) experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during aircraft flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
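As a brief worked check on the zero-elevation definition given earlier in this passage, dividing the 610.5 Pa datum by Earth's standard sea-level pressure (101.3 kPa, a value quoted later in this article) reproduces the stated figure:

$$p_0 = 610.5\ \text{Pa} \;\approx\; \frac{610.5}{101{,}325}\ \text{atm} \;\approx\; 0.006\ \text{atm},$$

about 0.6% of Earth's sea-level surface pressure.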
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe, and the canyon extends across one-fifth of the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization (involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars), or by Martian life. Compared to Earth's, the Martian atmosphere's higher concentration of CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, much as on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. Seasonally, coverings of carbon dioxide dry ice also form over the polar ice caps. Hydrology While Mars contains water in large amounts, most of it is dust-covered water ice at the Martian polar ice caps.
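The 43% sunlight figure quoted above follows from the inverse-square law applied to the 1.52-times-greater orbital distance; as a one-line worked equation:

$$\frac{S_{\text{Mars}}}{S_{\text{Earth}}} = \left(\frac{1}{1.52}\right)^{2} \approx 0.43.$$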
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
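For scale, the 11-metre global-equivalent layer quoted at the start of this passage can be turned into an ice volume. This is a rough estimate assuming a Martian surface area of about 1.44 × 10⁸ km² (from a mean radius of roughly 3,390 km, a value not given in the text):

$$V \approx A \times d \approx 1.44\times10^{8}\ \text{km}^{2} \times 0.011\ \text{km} \approx 1.6\times10^{6}\ \text{km}^{3}\ \text{of ice}.$$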
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
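The "five to seven times" enrichment stated above follows directly from the two D/H values quoted, including their uncertainty:

$$\frac{(9.3 \pm 1.7)\times10^{-4}}{1.56\times10^{-4}} \approx 6.0 \pm 1.1,$$

i.e. roughly five to seven times the terrestrial ratio.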
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth, after Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth near opposition, with a synodic period of 779.94 days. Opposition should not be confused with conjunction, when Earth and Mars are on opposite sides of the Sun and form a straight line through it. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars comes close enough for particularly good viewing once or twice every 15 to 17 years. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit the planet at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) respectively.
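The roughly 780-day synodic period and the 2.1-year opposition interval given above can be recovered from the two orbital periods with the standard synodic relation (Earth's 365.25-day year is assumed here, as it is not quoted in the text):

$$\frac{1}{S} = \frac{1}{P_{\text{Earth}}} - \frac{1}{P_{\text{Mars}}} = \frac{1}{365.25\ \text{d}} - \frac{1}{687\ \text{d}} \quad\Rightarrow\quad S \approx 780\ \text{d} \approx 2.1\ \text{yr}.$$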
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from the motion of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, 'fiery'). More commonly, the Greek name for the planet now known as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy was presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars measured by Tycho Brahe to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, Italian astronomer Galileo Galilei became the first to observe Mars with a telescope. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was the Soviet Union's Mars 1, which flew by in 1963, although contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous ideas about Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, including 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, the debris left on Mars by such missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars; this glass could likewise have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the rock does not permit a definitive determination of a biological or abiotic origin with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, as of 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Local Volume → Virgo Supercluster → Laniakea Supercluster → Pisces–Cetus Supercluster Complex → Local Hole → Observable universe → UniverseEach arrow (→) may be read as "within" or "part of". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-159] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest hold their voxel position even when unsupported in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villager NPCs by trading emeralds for different goods and vice versa.
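As an aside on the block world described above: it is, in essence, a sparse voxel grid keyed by integer coordinates. The short Python sketch below illustrates that general data structure; the class and method names are invented for illustration and are not Mojang's actual implementation.

```python
# Minimal sparse voxel world: block positions map to block types.
# Purely illustrative; not Minecraft's actual data model.

from typing import Dict, Tuple, Optional

Coord = Tuple[int, int, int]  # (x, y, z) in whole blocks

class VoxelWorld:
    def __init__(self) -> None:
        # Only non-air blocks are stored, so a huge world stays sparse.
        self.blocks: Dict[Coord, str] = {}

    def place(self, pos: Coord, block: str) -> None:
        """Place a block, as a player does after gathering materials."""
        self.blocks[pos] = block

    def mine(self, pos: Coord) -> Optional[str]:
        """Break a block and return it (if one is present)."""
        return self.blocks.pop(pos, None)

    def block_at(self, pos: Coord) -> str:
        return self.blocks.get(pos, "air")

world = VoxelWorld()
world.place((0, 64, 0), "dirt")
world.place((0, 65, 0), "torch")   # most blocks ignore gravity and stay put
print(world.block_at((0, 64, 0)))  # -> dirt
print(world.mine((0, 65, 0)))      # -> torch
```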
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
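The map-seed mechanism mentioned above means that terrain never has to be stored in advance: the same seed and the same chunk coordinates always regenerate the same terrain. The following Python sketch shows the general technique under that assumption; it is illustrative only and does not reproduce Minecraft's real noise-based generator.

```python
# Illustrative seeded chunk generation: the same (seed, chunk) pair always
# yields the same heights, so terrain is reproducible without being stored.
# A sketch of the general technique, not Minecraft's actual generator.

import hashlib
import random
from typing import List

CHUNK_SIZE = 16  # blocks per chunk edge, as in Minecraft

def chunk_heights(world_seed: int, chunk_x: int, chunk_z: int) -> List[List[int]]:
    # Mix the world seed with the chunk coordinates into a stable sub-seed.
    key = f"{world_seed}:{chunk_x}:{chunk_z}".encode()
    sub_seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(sub_seed)
    # Produce a 16x16 grid of surface heights around a nominal level of 64.
    return [[64 + rng.randint(-3, 3) for _ in range(CHUNK_SIZE)]
            for _ in range(CHUNK_SIZE)]

# The same seed and chunk coordinates always give identical terrain:
assert chunk_heights(123456789, 0, 0) == chunk_heights(123456789, 0, 0)
# A different seed gives a different world:
assert chunk_heights(123456789, 0, 0) != chunk_heights(987654321, 0, 0)
```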
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or, on peaceful difficulty, continuously. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are they affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a Realm, using a hosting provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike standard servers, Realms servers can be joined only by invited players, and they do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Owners of Realms servers for the Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, it was announced that cross-platform play between Windows 10, iOS, and Android platforms would be supported through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, ranging from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
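To make the resource-pack mechanism just mentioned concrete, the sketch below writes the minimal on-disk skeleton of a pack: a pack.mcmeta descriptor plus a folder for replacement assets. The pack_format value and the exact folder layout vary between game versions, so treat the specifics here as assumptions rather than a canonical template.

```python
# Sketch: write a minimal resource pack skeleton to ./my_resource_pack.
# The pack_format number and folder layout differ between game versions;
# the values below are illustrative assumptions, not authoritative.

import json
from pathlib import Path

root = Path("my_resource_pack")
root.mkdir(parents=True, exist_ok=True)

# pack.mcmeta tells the game how to read the pack.
(root / "pack.mcmeta").write_text(json.dumps({
    "pack": {
        "pack_format": 15,            # assumed; depends on the game version
        "description": "Example resource pack"
    }
}, indent=2))

# Replacement assets (textures, sounds, etc.) live under assets/<namespace>/.
textures = root / "assets" / "minecraft" / "textures" / "block"
textures.mkdir(parents=True, exist_ok=True)
# A real pack would drop a PNG here, e.g. textures / "stone.png",
# overriding the vanilla block texture of the same name.
```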
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specifically for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who have purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot simply be enabled on an arbitrary world with an arbitrary texture pack. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. This version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft like world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on a community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2013, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the Foley process for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition has reached 21 million sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first vote this was changed so that losing mobs would still have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot on an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Horizon] | [TOKENS: 3513] |
Contents Horizon Most commonly, the horizon is the border between the surface of a celestial body and its sky when viewed from the perspective of an observer on or above the surface of the celestial body. This concept is refined further in several ways. There is also an imaginary astronomical, celestial, or theoretical horizon, part of the horizontal coordinate system, which is an infinite eye-level plane perpendicular to a line that runs (a) from the center of a celestial body (b) through the observer and (c) out to space (see graphic above). It is used to calculate "horizon dip," which is the difference between the astronomical horizon and the sea horizon measured in arcs. Horizon dip is one factor taken into account in navigation by the stars. In perspective drawing, the horizon line (also referred to as "eye-level") is the point of view from which the drawn scene is presented. It is an imaginary horizontal line across the scene. The line may be above, level with, or below the center of the drawing, corresponding to looking down, straight at, or up to the drawn scene. Vanishing lines run from the foreground to one or more vanishing points on the horizon line. Etymology The word horizon derives from the Greek ὁρίζων κύκλος (horízōn kýklos) 'separating circle', where ὁρίζων is from the verb ὁρίζω (horízō) '(I) divide, (I) separate', which in turn derives from ὅρος (hóros) 'boundary, landmark'. True horizon The true horizon surrounds the observer and it is typically assumed to be a circle, drawn on the surface of a perfectly spherical model of the relevant celestial body, i.e., a small circle of the local osculating sphere. With respect to Earth, the center of the true horizon is below the observer and below sea level. Its radius or horizontal distance from the observer varies slightly from day to day due to atmospheric refraction, which is greatly affected by weather conditions. Also, the higher the observer's eyes are from sea level, the farther away the horizon is from the observer. For instance, in standard atmospheric conditions, for an observer with eye level above sea level by 1.8 metres (6 ft), the horizon is at a distance of about 4.8 kilometres (3 mi). When observed from very high standpoints, such as a space station, the horizon is much farther away and it encompasses a much larger area of Earth's surface. In this case, the horizon would no longer be a perfect circle, not even a plane curve such as an ellipse, especially when the observer is above the equator, as the Earth's surface can be better modeled as an oblate ellipsoid than as a sphere. The distance to the true (geometric) horizon (not accounting for atmospheric refraction) from an observer at height h above the surface of a celestial body assumed to be perfectly spherical can be calculated using the formula d = √(2Rh + h²), where d is the distance to the horizon, R is the radius of the celestial body, and h is the observer's height above its surface (d, R and h in the same units). Assuming no atmospheric refraction and a spherical Earth with radius R = 6,371 kilometres (3,959 mi), the distance depends only on the observer's height h. On terrestrial planets and other solid celestial bodies with negligible atmospheric effects, the distance to the horizon for a "standard observer" varies as the square root of the planet's radius. Thus, the horizon on Mercury is 62% as far away from the observer as it is on Earth, on Mars the figure is 73%, on the Moon the figure is 52%, on Mimas the figure is 18%, and so on. If the Earth is assumed to be a featureless sphere (rather than an oblate spheroid) with no atmospheric refraction, then the distance to the horizon can be calculated from simple geometry.
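As a quick check of the exact formula above, the following short Python sketch (the function and variable names and the sample heights are this illustration's own, not from the article) computes d = √(2Rh + h²) and reproduces the roughly 4.8 km horizon quoted for eyes 1.8 m above sea level, as well as the satellite figure used later in the article.

    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres, matching the 6,371 km used above

    def horizon_distance(h, radius=EARTH_RADIUS_M):
        """Exact geometric distance to the horizon, d = sqrt(2*R*h + h**2), ignoring
        refraction, for an observer at height h above a sphere of radius R.
        h and radius must share units; the result is in those same units."""
        return math.sqrt(2 * radius * h + h ** 2)

    # Eyes 1.8 m above sea level: about 4.8 km, consistent with the figure quoted above.
    print(round(horizon_distance(1.8) / 1000, 1), "km")
    # A satellite at 2,000 km altitude: about 5,430 km, the example used below.
    print(round(horizon_distance(2_000_000) / 1000), "km")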
The calculation uses the Pythagorean theorem. At the horizon, the line of sight is a tangent to the Earth and is also perpendicular to Earth's radius. This sets up a right triangle, with the sum of the radius and the height as the hypotenuse. Referring to the second figure at the right, this gives (R + h)² = R² + d², which may be solved to yield d = √(h(2R + h)) = √(2Rh + h²), where R is the radius of the Earth (R and h must be in the same units). For example, if a satellite is at a height of 2000 km, the distance to the horizon is 5,430 kilometres (3,370 mi); neglecting the second term in parentheses would give a distance of 5,048 kilometres (3,137 mi), a 7% error. If the observer is close to the surface of the Earth, then h is a negligible fraction of R, h can be disregarded in the term (2R + h), and the formula becomes d ≈ √(2Rh). Using kilometres for d and R, and metres for h, and taking the radius of the Earth as 6371 km, the distance to the horizon is d ≈ 3.57 √h. Using imperial units, with d and R in statute miles (as commonly used on land), and h in feet, the distance to the horizon is d ≈ 1.22 √h. If d is in nautical miles, and h in feet, the constant factor is about 1.06, which is close enough to 1 that it is often ignored, giving d ≈ √h. These formulas may be used when h is much smaller than the radius of the Earth (6371 km or 3959 mi), including all views from any mountaintops, airplanes, or high-altitude balloons. With the constants as given, both the metric and imperial formulas are precise to within 1% (see the next section for how to obtain greater precision). If h is significant with respect to R, as with most satellites, then the approximation is no longer valid, and the exact formula is required. Another relationship involves the great-circle distance s along the arc over the curved surface of the Earth to the horizon; this is more directly comparable to the geographical distance on a map. It can be formulated in terms of the central angle γ in radians: cos γ = R/(R + h) and s = Rγ. Solving for s gives s = R arccos(R/(R + h)). The distance s can also be expressed in terms of the line-of-sight distance d; from the second figure at the right, tan γ = d/R, so substituting for γ and rearranging gives s = R arctan(d/R). The distances d and s are nearly the same when the height of the object is negligible compared to the radius (that is, h ≪ R). When the observer is elevated, the horizon zenith angle can be greater than 90°. The maximum visible zenith angle occurs when the ray is tangent to Earth's surface; from triangle OCG in the figure at right, cos γ = R/(R + h), where h is the observer's height above the surface and γ is the angular dip of the horizon. It is related to the horizon zenith angle z by z = 90° + γ. For a non-negative height h, the angle z is always ≥ 90°. To compute the greatest distance DBL at which an observer B can see the top of an object L above the horizon, simply add the distances to the horizon from each of the two points: DBL = DB + DL. For example, for an observer B with a height of hB=1.70 m standing on the ground, the horizon is DB=4.65 km away. For a tower with a height of hL=100 m, the horizon distance is DL=35.7 km. Thus an observer on a beach can see the top of the tower as long as it is not more than DBL=40.35 km away. Conversely, if an observer on a boat (hB=1.7 m) can just see the tops of trees on a nearby shore (hL=10 m), the trees are probably about DBL=16 km away.
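A minimal Python sketch of the approximations just derived (the function names and worked numbers are illustrative, not from the article): it reproduces the 4.65 km, 35.7 km and 40.35 km figures of the beach-and-tower example, and inverts the same formula to find how much of a distant object is hidden, as in the ship example discussed below.

    import math

    K_METRIC = 3.57  # km per sqrt(metre), from d ≈ 3.57·√h with d in km and h in m

    def horizon_km(h_m, k=K_METRIC):
        """Approximate distance to the horizon in km for an eye height h_m in metres."""
        return k * math.sqrt(h_m)

    def max_sight_km(h_observer_m, h_target_m, k=K_METRIC):
        """Greatest distance at which the top of a target is visible: D_BL = D_B + D_L."""
        return horizon_km(h_observer_m, k) + horizon_km(h_target_m, k)

    def hidden_height_m(distance_km, h_observer_m, k=K_METRIC):
        """Height of a distant object hidden below the horizon, by inverting d = k·√h
        for the part of the distance that lies beyond the observer's own horizon."""
        beyond_km = max(0.0, distance_km - horizon_km(h_observer_m, k))
        return (beyond_km / k) ** 2

    print(round(horizon_km(1.70), 2))         # ≈ 4.65 km for an observer on the beach
    print(round(horizon_km(100), 1))          # ≈ 35.7 km for a 100 m tower
    print(round(max_sight_km(1.70, 100), 2))  # ≈ 40.35 km combined visibility
    print(round(hidden_height_m(20, 10), 1))  # ≈ 6 m of a ship 20 km away is hidden (eyes at 10 m)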
Referring to the figure at the right, and using the approximation above, the top of the lighthouse will be visible to a lookout in a crow's nest at the top of a mast of the boat if DBL < 3.57 (√hB + √hL), where DBL is in kilometres and hB and hL are in metres. As another example, suppose an observer, whose eyes are two metres above the level ground, uses binoculars to look at a distant building which he knows to consist of thirty stories, each 3.5 metres high. He counts the stories he can see and finds there are only ten. So twenty stories or 70 metres of the building are hidden from him by the curvature of the Earth. From this, he can calculate his distance from the building as D ≈ 3.57 (√2 + √70), which comes to about 35 kilometres. It is similarly possible to calculate how much of a distant object is visible above the horizon. Suppose an observer's eye is 10 metres above sea level, and he is watching a ship that is 20 km away. His horizon is 3.57 √10 kilometres from him, which comes to about 11.3 kilometres away. The ship is a further 8.7 km away. The height of a point on the ship that is just visible to the observer is given by h ≈ (8.7/3.57)², which comes to almost exactly six metres. The observer can therefore see that part of the ship that is more than six metres above the level of the water. The part of the ship that is below this height is hidden from him by the curvature of the Earth. In this situation, the ship is said to be hull-down. Refracted horizon Historically, the distance to the refracted horizon has long been vital to survival and successful navigation, especially at sea, because it determined an observer's maximum range of vision and thus of communication, with all the obvious consequences for safety and the transmission of information that this range implied. This importance lessened with the development of the radio and the telegraph, but even today, when flying an aircraft under visual flight rules, a technique called attitude flying is used, in which the pilot uses the visual relationship between the aircraft's nose and the horizon to control the aircraft. Pilots can also retain their spatial orientation by referring to the horizon. Due to atmospheric refraction, the distance to the visible horizon is farther than the distance based on a simple geometric calculation. If the ground (or water) surface is colder than the air above it, a cold, dense layer of air forms close to the surface, causing light to be refracted downward as it travels, and therefore, to some extent, to go around the curvature of the Earth. The reverse happens if the ground is hotter than the air above it, as often happens in deserts, producing mirages. As an approximate compensation for refraction, surveyors measuring distances longer than 100 metres subtract 14% from the calculated curvature error and ensure lines of sight are at least 1.5 metres from the ground, to reduce random errors created by refraction. If the Earth were an airless world like the Moon, the above calculations would be accurate. However, Earth has an atmosphere of air, whose density and refractive index vary considerably depending on the temperature and pressure. This makes the air refract light to varying extents, affecting the appearance of the horizon. Usually, the density of the air just above the surface of the Earth is greater than its density at greater altitudes. This makes its refractive index greater near the surface than at higher altitudes, which causes light that is travelling roughly horizontally to be refracted downward.
This makes the actual distance to the horizon greater than the distance calculated with geometrical formulas. With standard atmospheric conditions, the difference is about 8%. This changes the factor of 3.57, in the metric formulas used above, to about 3.86. For instance, if an observer is standing on the seashore, with eyes 1.70 m above sea level, according to the simple geometrical formulas given above the horizon should be 4.7 km away. Actually, atmospheric refraction allows the observer to see 300 metres farther, moving the visible horizon to about 5 km from the observer. This correction can be, and often is, applied as a fairly good approximation when atmospheric conditions are close to standard. When conditions are unusual, this approximation fails. Refraction is strongly affected by temperature gradients, which can vary considerably from day to day, especially over water. In extreme cases, usually in springtime, when warm air overlies cold water, refraction can allow light to follow the Earth's surface for hundreds of kilometres. Opposite conditions occur, for example, in deserts, where the surface is very hot, so that hot, low-density air lies below cooler air. This causes light to be refracted upward, causing mirage effects that make the concept of the horizon somewhat meaningless. Calculated values for the effects of refraction under unusual conditions are therefore only approximate. Nevertheless, attempts have been made to calculate them more accurately than the simple approximation described above. Outside the visual wavelength range, refraction will be different. For radar (e.g., for wavelengths of 300 to 3 mm, i.e., frequencies between 1 and 100 GHz), the radius of the Earth may be multiplied by 4/3 to obtain an effective radius, giving a factor of 4.12 in the metric formula, i.e., the radar horizon will be 15% beyond the geometrical horizon or 7% beyond the visual. The 4/3 factor is not exact, as in the visual case the refraction depends on atmospheric conditions. If the density profile of the atmosphere is known, the distance d to the horizon is given by d = RE (ψ + δ), where RE is the radius of the Earth, ψ is the dip of the horizon and δ is the refraction of the horizon. The dip is determined fairly simply from cos ψ = RE μ0 / ((RE + h) μ), where h is the observer's height above the Earth, μ is the index of refraction of air at the observer's height, and μ0 is the index of refraction of air at Earth's surface. The refraction δ must be found by an integration along the ray path, in which ϕ, the angle between the ray and a line through the center of the Earth, is related geometrically to ψ. A much simpler approach, which produces essentially the same results as the first-order approximation described above, uses the geometrical model but uses a radius R′ = 7/6 RE. The distance to the horizon is then d = √(2R′h). Taking the radius of the Earth as 6371 km, with d in km and h in m, d ≈ 3.86 √h; with d in mi and h in ft, d ≈ 1.32 √h. In the case of radar one typically has R′ = 4/3 RE, resulting (with d in km and h in m) in d ≈ 4.12 √h. Results from Young's method are quite close to those from Sweer's method, and are sufficiently accurate for many purposes. Astronomical horizon In astronomy, the horizon is the horizontal plane through the eyes of the observer. It is the fundamental plane of the horizontal coordinate system, the locus of points that have an altitude of zero degrees. While similar in ways to the geometrical horizon, in this context a horizon may be considered to be a plane in space, rather than a line on a picture plane.
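Returning to the distance formulas above, a small Python comparison (purely illustrative; the constants are simply the 3.57, 3.86 and 4.12 factors given in the text) shows the geometric, standard visual and radar horizon distances for the 1.70 m observer used in the seashore example.

    import math

    # Distance factors discussed above, for d in kilometres and h in metres.
    FACTORS = {
        "geometric (no refraction)": 3.57,
        "visual, standard refraction": 3.86,
        "radar, 4/3 effective Earth radius": 4.12,
    }

    eye_height_m = 1.70  # the observer standing on the seashore used in the example above

    for label, k in FACTORS.items():
        print(f"{label}: {k * math.sqrt(eye_height_m):.1f} km")
    # Prints roughly 4.7, 5.0 and 5.4 km: the ~300 m gain from standard refraction noted
    # above, plus the still larger radar horizon.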
Perspective In many contexts, especially perspective drawing, the curvature of the Earth is disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. For observers near sea level, the difference between this geometrical horizon (which assumes a perfectly flat, infinite ground plane) and the true horizon (which assumes a spherical Earth surface) is imperceptible to the unaided eye. However, for someone on a 1,000 m (3,300 ft) hill looking out across the sea, the true horizon will be about a degree below a horizontal line. The horizon is a key feature of the picture plane in the science of graphical perspective. Assuming the picture plane stands vertical to the ground, and P is the perpendicular projection of the eye point O on the picture plane, the horizon is defined as the horizontal line through P. The point P is the vanishing point of lines perpendicular to the picture. If S is another point on the horizon, then it is the vanishing point for all lines parallel to OS. But Brook Taylor (1719) indicated that the horizon plane determined by O and the horizon was like any other plane. The peculiar geometry of perspective, where parallel lines converge in the distance, stimulated the development of projective geometry, which posits a point at infinity where parallel lines meet. In her book Geometry of an Art (2007), Kirsti Andersen described the evolution of perspective drawing and science up to 1800, noting that vanishing points need not be on the horizon. In a chapter titled "Horizon", John Stillwell recounted how projective geometry has led to incidence geometry, the modern abstract study of line intersection. Stillwell also ventured into foundations of mathematics in a section titled "What are the Laws of Algebra?" The "algebra of points", originally given by Karl von Staudt to derive the axioms of a field, was deconstructed in the twentieth century, yielding a wide variety of mathematical possibilities.
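As a numerical aside on the claim above that the true horizon seen from a 1,000 m hill lies about a degree below the horizontal, a purely geometric check, ignoring refraction (the constants and names are mine):

```python
from math import acos, degrees

R = 6_371_000.0  # assumed mean Earth radius in metres

def dip_degrees(height_m: float) -> float:
    """Angle of the sea-level horizon below the astronomical horizontal, no refraction."""
    return degrees(acos(R / (R + height_m)))

print(dip_degrees(1000.0))  # ~1.0 degree, matching the 1,000 m hill example
print(dip_degrees(1.70))    # a few hundredths of a degree at eye level: imperceptible
```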
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Bosniak_language] | [TOKENS: 1830] |
Contents Bosnian language Bosnian[c] is the standard variety of the Serbo-Croatian language mainly used by Bosniaks.[d] It is one of the three official languages of Bosnia and Herzegovina; a co-official language in Montenegro; and an officially recognized minority language in Croatia, Serbia, North Macedonia and Kosovo.[e] Bosnian uses both the Latin and Cyrillic alphabets,[b] with Latin in everyday use. It is notable among the varieties of Serbo-Croatian for a number of Arabic, Persian and Ottoman Turkish loanwords,[f] largely due to the language's interaction with those cultures through Islamic ties. Bosnian is based on the most widespread dialect of Serbo-Croatian, Shtokavian, more specifically on Eastern Herzegovinian, which is also the basis of standard Croatian, Serbian and Montenegrin varieties. Therefore, the Declaration on the Common Language of Croats, Serbs, Bosniaks and Montenegrins was issued in 2017 in Sarajevo. Although the common name for the common language remains 'Serbo-Croatian', newer alternatives such as 'Bosnian-Croatian-Serbian' and 'Bosnian-Croatian-Montenegrin-Serbian' have been increasingly utilised since the 1990s, especially within diplomatic circles. Alphabet Table of the modern Bosnian alphabet in both Latin and Cyrillic, as well as with the IPA value, sorted according to Cyrillic: History Although Bosnians are, at the level of vernacular idiom, linguistically more homogeneous than either Serbians or Croatians, unlike those nations they failed to codify a standard language in the 19th century, with at least two factors being decisive: The modern Bosnian standard took shape in the 1990s and 2000s. Lexically, Islamic-Oriental loanwords are more frequent; phonetically: the phoneme /x/ (letter h) is reinstated in many words as a distinct feature of vernacular Bosniak speech and language tradition; also, there are some changes in grammar, morphology and orthography that reflect the Bosniak pre-World War I literary tradition, mainly that of the Bosniak renaissance at the beginning of the 20th century. The name "Bosnian language" is a controversial issue for some Croats and Serbs, who also refer to it as the "Bosniak" language (Serbo-Croatian: bošnjački / бошњачки, [bǒʃɲaːtʃkiː]). Bosniak linguists however insist that the only legitimate name is "Bosnian" language (bosanski) and that that is the name that both Croats and Serbs should use. The controversy arises because the name "Bosnian" may seem to imply that it is the language of all Bosnians, while Bosnian Croats and Serbs reject that designation for their idioms.[citation needed] The language is called Bosnian language in the 1995 Dayton Accords and is concluded by observers to have received legitimacy and international recognition at the time. The International Organization for Standardization (ISO), United States Board on Geographic Names (BGN) and the Permanent Committee on Geographical Names (PCGN) recognize the Bosnian language. Furthermore, the status of the Bosnian language is also recognized by bodies such as the United Nations, UNESCO and translation and interpreting accreditation agencies, including internet translation services. Most English-speaking language encyclopedias (Routledge, Glottolog, Ethnologue, etc.) register the language solely as "Bosnian" language. The Library of Congress registered the language as "Bosnian" and gave it an ISO-number. 
The Slavic language institutes in English-speaking countries offer courses in "Bosnian" or "Bosnian/Croatian/Serbian" language, not in "Bosniak" language (e.g. Columbia, Cornell, Chicago, Washington, Kansas). The same is the case in German-speaking countries, where the language is taught under the name Bosnisch, not Bosniakisch (e.g. Vienna, Graz, Trier), with very few exceptions. "I began writing The Legend of Ali Pasha with a specific purpose - to preserve our Bosnian language. Not the language of denominations or peoples of Bosnia, but the language of Bosnia. I also wanted to re-create a historical period of Bosnia." Some Croatian linguists (Zvonko Kovač, Ivo Pranjković, Josip Silić) support the name "Bosnian" language, whereas others (Radoslav Katičić, Dalibor Brozović, Tomislav Ladan) hold that the term Bosniak language is the only one appropriate[clarification needed] and that accordingly the terms Bosnian language and Bosniak language refer to two different things.[clarification needed] The Croatian state institutions, such as the Central Bureau of Statistics, use both terms: "Bosniak" language was used in the 2001 census, while the census in 2011 used the term "Bosnian" language. The majority of Serbian linguists hold that the term Bosniak language is the only one appropriate, which was agreed as early as 1990. The original form of The Constitution of the Federation of Bosnia and Herzegovina called the language "Bosniac language", until 2002 when it was changed in Amendment XXIX of the Constitution of the Federation by Wolfgang Petritsch. The original text of the Constitution of the Federation of Bosnia and Herzegovina was agreed in Vienna and was signed by Krešimir Zubak and Haris Silajdžić on March 18, 1994. The constitution of Republika Srpska, the Serb-dominated entity within Bosnia and Herzegovina, did not recognize any language or ethnic group other than Serbian. Bosniaks were mostly expelled from the territory controlled by the Serbs from 1992, but immediately after the war they demanded the restoration of their civil rights in those territories. The Bosnian Serbs refused to make reference to the Bosnian language in their constitution and as a result had constitutional amendments imposed by High Representative Wolfgang Petritsch. However, the constitution of Republika Srpska refers to it as the Language spoken by Bosniaks, because the Serbs were required to recognise the language officially, but wished to avoid recognition of its name. Serbia includes the Bosnian language as an elective subject in primary schools. Montenegro officially recognizes the Bosnian language: its 2007 Constitution specifically states that although Montenegrin is the official language, Serbian, Bosnian, Albanian and Croatian are also in official use. Differences between Bosnian, Croatian and Serbian The differences between the Bosnian, Serbian, and Croatian literary standards are minimal. Although Bosnian employs more Turkish, Persian, and Arabic loanwords—commonly called orientalisms—mainly in its spoken variety because most Bosnian speakers are Muslims, it is still very similar to both Serbian and Croatian in its written and spoken form. "Lexical differences between the ethnic variants are extremely limited, even when compared with those between closely related Slavic languages (such as standard Czech and Slovak, Bulgarian and Macedonian), and grammatical differences are even less pronounced. 
More importantly, complete understanding between the ethnic variants of the standard language makes translation and second language teaching impossible." The Bosnian language, as a new normative register of the Shtokavian dialect, was officially introduced in 1996 with the publication of Pravopis bosanskog jezika in Sarajevo. According to that work, Bosnian differed from Serbian and Croatian on some main linguistic characteristics, such as: the sound h retained in some words (kahva versus Serbian kafa); substantial and deliberate usage of Oriental ("Turkish") words; spelling of future tense (kupit ću) as in Croatian but not Serbian (kupiću) (both forms have the same pronunciation). In 2018, in the new edition of Pravopis bosanskog jezika, words without "h" were accepted due to their prevalence in language practice. Sample text Article 1 of the Universal Declaration of Human Rights in Bosnian, written in the Cyrillic script: Article 1 of the Universal Declaration of Human Rights in Bosnian, written in the Latin alphabet: Article 1 of the Universal Declaration of Human Rights in English:
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer_network] | [TOKENS: 8037] |
Contents Computer network In computer science, computer engineering, and telecommunications, a network is a group of computers and peripherals, known as hosts, which exchange data with one another via communication protocols, facilitated by networking hardware. Within a computer network, hosts are identified by network addresses, which allow networking hardware to locate and identify hosts. Hosts may also have hostnames, memorable labels for the host nodes, which can be mapped to a network address using a hosts file or a name server such as the Domain Name System (DNS). The physical media that support information exchange include wired media such as copper cables and optical fibers, and wireless radio-frequency media. The arrangement of hosts and hardware within a network architecture is known as the network topology. The first computer network was created in 1940 when George Stibitz connected a terminal at Dartmouth to his Complex Number Calculator at Bell Labs in New York. Today, almost all computers are connected to a computer network, such as the global Internet or embedded networks such as those found in many modern electronic devices. Many applications have only limited functionality unless they are connected to a network. Networks support applications and services, such as access to the World Wide Web, digital video and audio, application and storage servers, printers, and email and instant messaging applications. History In 1940, George Stibitz of Bell Labs connected a teletype at Dartmouth to a Bell Labs computer running his Complex Number Calculator to demonstrate the use of computers at long distance. This was the first real-time, remote use of a computing machine. In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s). In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963). In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project. During the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability. Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. 
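Returning to the note above that hostnames are mapped to network addresses through a hosts file or DNS, here is a minimal sketch of such a lookup using Python's standard resolver (the hostname is an arbitrary example, not taken from the article):

```python
import socket

# Ask the system resolver (hosts file, then DNS) for the address behind a hostname.
hostname = "example.org"              # arbitrary example hostname
address = socket.gethostbyname(hostname)
print(f"{hostname} -> {address}")     # prints the resolved IPv4 address
```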
The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using 768 kbit/s links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks. In 1962 and 1963, J. C. R. Licklider sent a series of memos to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users. This ultimately became the basis for the ARPANET, which began in 1969. That year, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California, Santa Barbara, and the University of Utah. The network's routing, flow control, software design and network control were designed principally by Bob Kahn and developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network. That same year, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a local area networking system he created with David Boggs. It was inspired by ALOHAnet, a packet radio network started by Norman Abramson and Franklin Kuo at the University of Hawaii in the late 1960s. Metcalfe and Boggs, with John Shoch and Edward Taft, also developed the PARC Universal Packet for internetworking. That year, the French CYCLADES network, directed by Louis Pouzin, was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1974, Vint Cerf and Bob Kahn published their seminal paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking. In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, Hal Murray, Dave Redell and Yogen Dalal. In 1986, the National Science Foundation (NSF) launched the National Science Foundation Network (NSFNET) as a general-purpose research network connecting various NSF-funded sites to each other and to regional research and education networks. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. 
Subsequently, higher speeds of up to 800 Gbit/s were added (as of 2025). The scaling of Ethernet has been a contributing factor to its continued use. In the 1980s and 1990s, as embedded systems were becoming increasingly important in factories, cars, and airplanes, network protocols were developed to allow the embedded computers to communicate. In the late 1990s and 2000s, ubiquitous computing and the Internet of Things became popular. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric. In 1972, commercial services were first deployed on experimental public data networks in Europe. Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s. In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California. Hardware The transmission media used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer. Common examples of networking technologies include: The following classes of wired technologies are used in computer networking. Network connections can be established wirelessly using radio or other electromagnetic means of communication. The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput). Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions. A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry. In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce. A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. 
Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule. An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches. Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication whereas a hub forwards to all ports. Bridges only have two ports but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches. Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame. They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply. Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks. A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not require broadcasting packets which is inefficient for very big networks. Modems (modulator-demodulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using a digital subscriber line technology and cable television systems using DOCSIS technology. A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks. Communication A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics, such as being connection-oriented or connectionless, or using circuit switching or packet switching. 
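Before moving on to protocols, the MAC-learning forwarding behaviour of bridges and switches described above can be sketched as a toy model (the port numbers and addresses are invented; the first three octets of each address would be the manufacturer's OUI prefix):

```python
from typing import Dict, Iterable, Set

class LearningSwitch:
    """Toy model of MAC learning and forwarding; not a real device driver."""

    def __init__(self, ports: Iterable[int]):
        self.ports: Set[int] = set(ports)
        self.mac_table: Dict[str, int] = {}   # MAC address -> port it was last seen on

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> Set[int]:
        # Learn: associate the source address with the arrival port.
        self.mac_table[src_mac] = in_port
        # Forward: to the known port only, otherwise flood to every other port.
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}
        return self.ports - {in_port}

# Invented addresses; the leading octets (00:1a:2b) stand in for a manufacturer OUI.
sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("00:1a:2b:00:00:01", "00:1a:2b:00:00:02", in_port=1))  # flood: {2, 3, 4}
print(sw.handle_frame("00:1a:2b:00:00:02", "00:1a:2b:00:00:01", in_port=2))  # learned: {1}
```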
In a protocol stack, often constructed per the OSI model, communications functions are divided into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP, the World Wide Web protocol. HTTP runs over TCP over IP, the Internet protocols, which in turn run over IEEE 802.11, the Wi-Fi protocol. This stack is used between a wireless router and a personal computer when accessing the web. Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network. Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between. With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free. The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message. The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connection-less and connection-oriented services over an inherently unreliable network traversed by datagram transmission using Internet protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet. IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model. For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key". Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers. Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. 
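To make the layering example above concrete (HTTP carried by TCP, which is carried by IP, which is carried by whatever link is available), here is a hand-rolled request over a plain TCP socket; the host name is an arbitrary example, and real applications would normally use an HTTP library instead:

```python
import socket

# The GET line and headers are the application layer (HTTP); they ride inside TCP
# segments, which ride inside IP packets, which ride on the local link layer.
host = "example.org"   # arbitrary example host
request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode()

with socket.create_connection((host, 80), timeout=5) as conn:
    conn.sendall(request)
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```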
IEEE 802.11 shares many properties with wired Ethernet. Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH was also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames. Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins. ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.[needs update] There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks. In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though, because they lack specialized hardware, they may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths. Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks. Architecture The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. 
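A minimal sketch of the routing-table idea discussed above, where structured addresses let a single entry cover a whole group of destinations and the most specific matching prefix wins; the prefixes and next hops are invented examples:

```python
import ipaddress

# Invented prefixes and next hops; the lookup rule is longest-prefix match.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "via 192.0.2.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "via 192.0.2.2"),
    (ipaddress.ip_network("0.0.0.0/0"), "default via 192.0.2.254"),
]

def lookup(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if addr in net]
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))     # via 192.0.2.2 (the /16 beats the /8)
print(lookup("203.0.113.9"))  # default via 192.0.2.254
```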
In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology, which is the map of logical interconnections of network hosts. Common topologies are: The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding. An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet. Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed. The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network. Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys. Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network.[citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination[citation needed]. For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others. Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale. 
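The distributed hash table mentioned above can be sketched as a hash ring: each key is hashed to a position, and it belongs to the first node at or after that position. The node names below are placeholders, and real DHTs such as Chord or Kademlia add routing tables and replication on top of this basic idea:

```python
import hashlib
from bisect import bisect

def ring_position(name: str) -> int:
    """Hash a key or node name onto a 32-bit ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % 2**32

nodes = sorted(["node-a", "node-b", "node-c"], key=ring_position)  # placeholder names
positions = [ring_position(n) for n in nodes]

def node_for(key: str) -> str:
    """The key belongs to the first node at or after its ring position, wrapping around."""
    return nodes[bisect(positions, ring_position(key)) % len(nodes)]

for key in ("alpha", "beta", "gamma"):
    print(key, "->", node_for(key))
```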
A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques. A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN. A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines. A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity.[citation needed] Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010. A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.). For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls. A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it. For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carry the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks. A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area. 
A wide area network (WAN) is a computer network that covers a large geographic area such as a city, country, or spans even intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer. A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs. An intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers). Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity. An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology. An internetwork is the connection of multiple different types of computer networks to form a single computer network using higher-layer network protocols and connecting them together using routers. The Internet is the largest example of internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services. Participants on the Internet use a diverse array of methods of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. 
Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths. A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports. Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference. A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features. Services Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate. The World Wide Web, E-mail, printing and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP and MAC addresses (people remember names like nm.lan better than numbers like 210.121.67.18), while the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address. Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service. Performance Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation). Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay: processing delay, queueing delay, transmission (serialization) delay, and propagation delay. A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds. The parameters that typically affect performance include throughput, jitter, bit error rate and latency. In circuit-switched networks, network performance is synonymous with the grade of service. 
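As a back-of-the-envelope illustration of the delay components discussed above, the two easiest to pin down are serialization (transmission) delay and propagation delay; the figures below are arbitrary examples, and queueing and processing delays are ignored:

```python
# Two delay components for a single link; queueing and processing delays are ignored.
packet_bits = 1500 * 8        # a typical Ethernet-sized packet
link_rate_bps = 100e6         # 100 Mbit/s link (example figure)
distance_m = 1_000_000        # 1,000 km path (example figure)
signal_speed_mps = 2e8        # roughly two-thirds of the speed of light in copper or fibre

transmission_delay = packet_bits / link_rate_bps   # 0.12 ms to serialize the packet
propagation_delay = distance_m / signal_speed_mps  # 5 ms for the signal to travel
print(f"one-way delay ~ {1000 * (transmission_delay + propagation_delay):.2f} ms")
```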
The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo. In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.[verification needed][full citation needed] There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed. Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput. Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse. Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard. For the Internet, RFC 2914 addresses the subject of congestion control in detail. Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Security Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack. Network Security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. 
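A sketch of the binary exponential backoff mentioned above, in the spirit of classic Ethernet and CSMA/CA: after each collision the station waits a random number of slot times drawn from an interval that doubles, up to a cap. The slot time shown is the classic 10 Mbit/s Ethernet value; the cap and the loop are illustrative:

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbit/s Ethernet slot time, in microseconds
MAX_EXPONENT = 10     # illustrative cap on the doubling

def backoff_us(collisions: int) -> float:
    """Wait a random number of slot times from [0, 2^n - 1] after the n-th collision."""
    slots = random.randint(0, 2 ** min(collisions, MAX_EXPONENT) - 1)
    return slots * SLOT_TIME_US

for n in range(1, 5):
    print(f"after collision {n}: wait {backoff_us(n):.1f} microseconds")
```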
Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals. Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency. Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity. Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity. Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio. Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent. The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. 
When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client.
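The certificate check and symmetric-key negotiation described above are carried out today by TLS, the successor to SSL. A minimal client sketch using Python's standard library; the host name is an arbitrary example:

```python
import socket
import ssl

host = "example.org"                    # arbitrary example host
context = ssl.create_default_context()  # loads the preinstalled root certificates

with socket.create_connection((host, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls:
        # At this point the server certificate has been verified against the root
        # store and a symmetric session cipher has been negotiated.
        print(tls.version(), tls.cipher())
        tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
        print(tls.recv(64))             # first bytes of the decrypted reply
```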
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ethology] | [TOKENS: 954] |
Contents Ethology Ethology is a branch of zoology that studies the behaviour of non-human animals. It has its scientific roots in the work of Charles Darwin and of American and German ornithologists of the late 19th and early 20th century, including Charles O. Whitman, Oskar Heinroth, and Wallace Craig. The modern discipline of ethology is generally considered to have begun during the 1930s with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch, the three winners of the 1973 Nobel Prize in Physiology or Medicine. Ethology combines laboratory and field science, with a strong relation to neuroanatomy, ecology, and evolutionary biology. Etymology The modern term ethology derives from the Greek language: ἦθος, ethos meaning "character" and -λογία, -logia meaning "the study of". The term was first popularized by the American entomologist William Morton Wheeler in 1902. History Ethologists have been concerned particularly with the evolution of behaviour and its understanding in terms of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose 1872 book The Expression of the Emotions in Man and Animals influenced many ethologists. He pursued his interest in behaviour by encouraging his protégé George Romanes, who investigated animal learning and intelligence using an anthropomorphic method, anecdotal cognitivism, that did not gain scientific support. Other early ethologists, such as Eugène Marais, Charles O. Whitman, Oskar Heinroth, Wallace Craig and Julian Huxley, instead concentrated on behaviours that can be called instinctive in that they occur in all members of a species under specified circumstances. Their starting point for studying the behaviour of a new species was to construct an ethogram, a description of the main types of behaviour with their frequencies of occurrence. This provided an objective, cumulative database of behaviour. Due to the work of Konrad Lorenz and Niko Tinbergen, ethology developed strongly in continental Europe during the years prior to World War II. After the war, Tinbergen moved to the University of Oxford, and ethology became stronger in the UK, with the additional influence of William Thorpe, Robert Hinde, and Patrick Bateson at the University of Cambridge. Lorenz, Tinbergen, and von Frisch were jointly awarded the Nobel Prize in Physiology or Medicine in 1973 for their work of developing ethology. Ethology is now a well-recognized scientific discipline, with its own journals such as Animal Behaviour, Applied Animal Behaviour Science, Animal Cognition, Behaviour, Behavioral Ecology and Ethology. In 1972, the International Society for Human Ethology was founded along with its journal, Human Ethology. In 1972, the English ethologist John H. Crook distinguished comparative ethology from social ethology, and argued that much of the ethology that had existed so far was really comparative ethology—examining animals as individuals—whereas, in the future, ethologists would need to concentrate on the behaviour of social groups of animals and the social structure within them. E. O. Wilson's book Sociobiology: The New Synthesis appeared in 1975, and since that time, the study of behaviour has been much more concerned with social aspects. It has been driven by the Darwinism associated with Wilson, Robert Trivers, and W. D. Hamilton. The related development of behavioural ecology has helped transform ethology. 
Furthermore, a substantial rapprochement with comparative psychology has occurred, so the modern scientific study of behaviour offers a spectrum of approaches. In 2020, Tobias Starzak and Albert Newen from the Institute of Philosophy II at the Ruhr University Bochum postulated that animals may have beliefs. Tinbergen's four questions for ethologists Tinbergen argued that ethology needed to include four kinds of explanation in any instance of behaviour: its function (adaptive value), its causation (immediate mechanism), its development over the individual's lifetime, and its evolutionary history. These explanations are complementary rather than mutually exclusive—all instances of behaviour require an explanation at each of these four levels. For example, the function of eating is to acquire nutrients (which ultimately aids survival and reproduction), but the immediate cause of eating is hunger (causation). Hunger and eating are evolutionarily ancient and are found in many species (evolutionary history), and develop early within an organism's lifespan (development). It is easy to confuse such questions—for example, to argue that people eat because they are hungry and not to acquire nutrients—without realizing that the reason people experience hunger is that it causes them to acquire nutrients.
======================================== |