[SOURCE: https://en.wikipedia.org/wiki/Video_games] | [TOKENS: 10231]
Video game

A video game,[a] computer game,[b] or simply game is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations). Some video games also allow microphone and webcam inputs for in-game chatting and livestreaming. Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer games (which includes LAN games, online games, and browser games). More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.

The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971, which took inspiration from the earlier 1962 computer game Spacewar!. In 1972 came the now-iconic video game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the "golden age" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or "indie games") to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual and cozy gaming, and increasing monetization by incorporating games as a service.

Today, video game development requires numerous skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. As of 2020, the global video game market had estimated annual revenues of US$159 billion across hardware, software, and services, which is three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry.
The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation.

Origins

Early video games used interactive electronic devices with various display formats. The earliest example dates to 1947, when a patent for a "cathode-ray tube amusement device" was filed on 25 January 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948 as U.S. Patent 2455992. Inspired by radar display technology, it consisted of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which were paper drawings fixed to the screen. Other early examples include Christopher Strachey's Checkers; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1962. Each game had different means of display: NIMROD had a panel of lights to play the game of Nim, OXO had a graphical display to play tic-tac-toe, Tennis for Two had an oscilloscope to display a side view of a tennis court, and Spacewar! had the DEC PDP-1's vector display to have two spaceships battle each other. These inventions laid the foundation for modern video games. In 1966, while working at Sanders Associates, Ralph H. Baer devised a system to play a basic table tennis game on a television screen. With the company's approval, Baer created the prototype known as the "Brown Box". Sanders patented Baer's innovations and licensed them to Magnavox, which commercialized the technology as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been titled "Father of Video Games" for their contributions.

Terminology

The term "video game" was developed to describe electronic games played on a video display rather than on a teletype printer, audio speaker, or similar device. This also distinguished them from handheld electronic games such as Merlin, which commonly used LED lights as indicators rather than in combination for imaging purposes. "Computer game" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor; in some cases, it is used interchangeably with "video game". Particularly in the United Kingdom and Western Europe, this is common due to the historic relevance of domestically produced microcomputers.
Other terms used include digital game, for example, by the Australian Bureau of Statistics. The term "computer game" can also refer to PC games, which are played primarily on personal computers or other flexible hardware systems, to distinguish them from console games, arcade games, or mobile games. Other terms, such as "television game", "telegame", or "TV game", had been used in the 1970s and early 1980s, particularly for home gaming consoles that rely on connection to a television set. However, these terms were also used interchangeably with "video game" in the 1970s, primarily due to "video" and "television" being synonymous. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", "TV geemu", or "terebi geemu". The term "TV game" is still commonly used into the 21st century. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output. The term "video game" first appeared around 1973. The Oxford English Dictionary cited a 10 November 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox suggested that its earliest traceable use was in a letter dated July 10, 1972, in which Bushnell uses the term "video game" twice. Per video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those in the field. Ed Adlum, who ran Cashbox's coin-operated section until 1972 and later founded RePlay Magazine, covering the coin-op amusement field, in 1975, used the term in an article in March 1973. In a September 1982 issue of RePlay, Adlum is credited with first naming these games as "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns, and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck."[citation needed] Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "What the hell... video game!"

While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of what essential factors of a video game separate the medium from other forms of entertainment. Interactive films, introduced in the 1980s with games like Dragon's Lair, featured full-motion video played off a form of media but offered only limited user interaction.
This required a means to distinguish these games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display. Most video games tend to feature some type of victory or winning conditions, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer brought the idea of games that did not have any such winning condition, raising the question of whether these were actually games. These are still commonly justified as video games as they provide a game world that the player can interact with by some means. The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple which dealt with video games offered on Apple's iOS App Store. Among concerns raised were games like Fortnite Creative and Roblox which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry-standard definition of a video game, established for her ruling that "At a bare minimum, video games appear to require some level of interactivity or involvement between the player and the medium" compared to passive entertainment like film, music, and television, and "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse".

The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete the avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", then the player will reach the "game over" screen. Many levels as well as the game's finale end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These may also be in the form of a password that can be written down and reentered at the title screen.[citation needed]

Product flaws include software bugs which can manifest as glitches which may be exploited by the player; this is often the foundation of speedrunning a video game.
These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games. Either of these might make the game easier, give the player additional power-ups, or change the appearance of the game.

Components

To distinguish from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and displays the results to a video output display. Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. These platforms may include multiple brands held by platform holders, such as Nintendo or Sony, seeking to gain larger market shares. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform or brand is used by platform holders as a competitive edge in the video game market. However, games may be developed for alternative platforms than intended, which are described as ports or conversions. These may also be remasters, where most of the original game's source code is reused and art assets, models, and game levels are updated for modern systems, or remakes, where in addition to asset improvements, significant reworking of the original game, possibly from scratch, is performed. The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators. A console game is played on a home console, a specialized electronic device that connects to a common television set or composite video monitor. Home consoles are specifically designed to play games using a dedicated hardware environment, giving developers a concrete hardware target for development and assurances of what features will be available, simplifying development compared to PC game development. Usually a console only runs games developed for it, or games from other platforms made by the same company, but never games developed by its direct competitors, even if the same game is available on different platforms. It often comes with a specific game controller. Major console platforms include Xbox, PlayStation and Nintendo. An arcade video game generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and an audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game. While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could use a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them.
Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic-tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods as well as cloud gaming alleviate the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may take the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading periods and later updates. Games can be extended with new content and software patches through either expansion packs which are typically available as physical media, or as downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these often are unofficial and were developed by players from reverse engineering of the game, but other games provide official support for modding the game.

Video games can use several types of input devices to translate human actions to a game. Most common is the use of game controllers like gamepads and joysticks for most consoles, and as accessories for personal computer systems along with keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the controller, and on other systems such as virtual reality, are used to enhance immersion into the game.

By definition, all video games are intended to output graphics to an external video display, such as cathode ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself.
The game's output can range from fixed displays using LED or LCD elements, text-based games, two-dimensional and three-dimensional graphics, and augmented reality displays. The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game. Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate a shaking earthquake occurring in game.

Classifications

Video games are frequently classified by a number of factors related to how one plays them. A video game, like most other forms of media, may be categorized into genres. However, unlike film or television which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror. Genre names are normally self-describing in terms of the type of gameplay, such as action game, role playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within the shooter genre, the first-person shooter and third-person shooter. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game.

A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time. A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.
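To make the idea of a zero-player game concrete, the following is a minimal Python sketch of one step of Conway's Game of Life: the player only chooses the starting pattern, after which the rules evolve the grid with no further input. The function name, the coordinate-set representation, and the glider seed are illustrative choices for this sketch, not taken from any particular implementation.

from collections import Counter

def life_step(live_cells):
    """Return the next generation given a set of (x, y) live-cell coordinates."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# The "player" only sets a starting pattern (here, a glider);
# after that the simulation runs itself while the player watches.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(1, 5):
    state = life_step(state)
    print(f"generation {generation}: {len(state)} live cells")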
Most video games are intended for entertainment purposes. Different game types include: Video games can be subject to national and international content rating requirements. As with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it may be represented, and sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of. The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body against the video game publisher for misuse of the ratings. The major content rating systems include: Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content rating systems between different regions, so that a publisher would only need to complete the content ratings review for one provider and use the IARC to affirm the content rating for all other regions. Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus block the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional alternatives. This ruling was relaxed in 2018 to allow for such imagery for "social adequacy" purposes that applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements.

Development

Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. Over the years, this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers. In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more.
This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs). Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology have expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available for use to make games, which often work across multiple platforms to support portability, or they may still opt to create their own for more specialized features and direct control of the game. Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers access other features, such as playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates. With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games, and the release of unfinished products. While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and are often experiments in gameplay and art style. Indie game development is aided by the larger availability of digital distribution, including the newer mobile gaming market, and readily available and low-cost development tools for these platforms.
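As a rough illustration of the division of labour between a game and its engine described earlier in this section, here is a minimal Python sketch in which a generic loop and physics step are kept separate from game-specific rules. The class names, the fixed 60-updates-per-second timestep, and the toy gravity model are illustrative assumptions for this sketch, not the API of any real engine.

import time

class PhysicsEngine:
    """Generic, reusable simulation: applies gravity and a crude ground collision."""
    GRAVITY = -9.8  # metres per second squared

    def step(self, bodies, dt):
        for body in bodies:
            body["vy"] += self.GRAVITY * dt
            body["y"] += body["vy"] * dt
            if body["y"] < 0:  # stop at the ground
                body["y"], body["vy"] = 0.0, 0.0

class Engine:
    """Owns the main loop; delegates rules to the game and simulation to the physics engine."""
    def __init__(self, game, dt=1 / 60):
        self.game = game
        self.physics = PhysicsEngine()
        self.dt = dt

    def run(self, steps):
        for _ in range(steps):
            self.game.update(self.dt)                      # game-specific logic
            self.physics.step(self.game.bodies, self.dt)   # generic physics
            self.game.render()                             # stand-in for real drawing
            time.sleep(self.dt)

class FallingBallGame:
    """The 'game' only describes its own objects and rules."""
    def __init__(self):
        self.bodies = [{"y": 10.0, "vy": 0.0}]

    def update(self, dt):
        pass  # input handling, scoring, AI, etc. would go here

    def render(self):
        print(f"ball height: {self.bodies[0]['y']:.2f} m")

Engine(FallingBallGame()).run(steps=10)

In this arrangement, a different game could reuse the Engine and PhysicsEngine classes unchanged, which is the appeal of middleware and engines noted above.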
Although departments of computer science have been studying technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development. The two most visible schools in this field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter. Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Ludologists reject traditional theories of art because they claim the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.[citation needed]

While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle on the track: the cars might then maneuver to avoid the obstacle, causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.[citation needed]

Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well. Though local copyright regulations vary in the degree of protection, video games qualify as copyrightable audiovisual works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall into the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game.
Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example, Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to a flooded arcade and dedicated home console market around 1978. Cloning is also a major issue with countries that do not have strong intellectual property protection laws, such as China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court has enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets.

Industry

The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, dropping from revenues of around $3 billion in 1983 to $100 million by 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on their platform, methods that continue to be used by console manufacturers today. The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s, leading to the industry centralizing around low-risk, triple-A games and studios with large development budgets of $10 million or more. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of riskier, more experimental independent game development as an alternative to triple-A games in the late 2000s, which has continued to grow as a significant portion of the video game industry. Video games have a large network effect that draws on many different sectors that tie into the larger video game industry.
While video game developers are a significant portion of the industry, other key participants in the market include: The industry itself grew out of both the United States and Japan in the 1970s and 1980s before having a larger worldwide contribution. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and East Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field. According to the market research firm Newzoo, the global video game industry drew estimated revenues of over $159 billion in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%. Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPGs and real-time strategy games. Computer games are also popular in China.

Effects on society

Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing.

Since the mid-2000s there has been debate whether video games qualify as art, primarily because the form's interactivity interfered with the artistic intent of the work and because they are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit.
Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016. Video games often inspire sequels and other video games within the same franchise, but have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving "Fresh" ratings, showed signs of the film industry having found an approach to adapt video games for the large screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider. Since the 2000s, there has also been a greater appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together. Further, video games can serve as a virtual environment under full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian. Separately, video games are also frequently used as part of the promotion and marketing for other media, such as for films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in the quality of these games, with an early trendsetting example being Batman: Arkham Asylum.

Besides their entertainment value, appropriately designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system.
It has been noticed that gamers adopt an attitude while playing that is of such high concentration that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which also fosters creative thinking. Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects.[citation needed] A 2018 systematic review found evidence that video gaming training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video game types. Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The study of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. It found that those who played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18- to 55-year-olds. A study of gamers' attitudes towards gaming, reported on in 2018, found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that it "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures". Video games have caused controversy since the 1970s.
Parents and children's advocates regularly raise concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which some claimed the perpetrators specifically alluded to using video games to plot out their attack, raised further fears.[citation needed] Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger the release of dopamine, helping to reinforce the desire to continue playing and potentially leading to violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep their products in check to avoid excessive violence, particularly for games aimed at younger children. The potential addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as the controversy around the use of loot boxes in many high-profile games. Numerous other controversies around video games and the industry have arisen over the years; among the more notable incidents are the 1993 United States Congressional hearings on violent games like Mortal Kombat which led to the formation of the ESRB ratings system, numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007, the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009 which allowed the player to shoot a number of innocent non-player characters at an airport, and the Gamergate harassment campaign in 2014 that highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", periods of required extended working hours, in the weeks and months ahead of a game's release to assure on-time delivery.

Collecting and preservation

Players of video games often maintain collections of games. More recently there has been interest in retro gaming, focusing on games from the first decades. Games in retail packaging in good shape have become collector's items for the early days of the industry, with some rare publications having gone for over US$100,000 as of 2020. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry.
There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including a 5,000-square-foot (460 m2) exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong. The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEDundes1972-92] | [TOKENS: 8460]
Joke

A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour: the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1]

History in print

Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh?
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple from Adab dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of the book documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour, or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are script opposition (SO), logical mechanism (LM), situation (SI), target (TA), narrative strategy (NS) and language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
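Before turning to the psychological work surveyed below, it may help to make the concatenated KR label described above concrete. The sketch that follows represents a joke as a simple six-field record and compares two jokes by counting shared Knowledge Resources; it is an illustrative sketch only, and the field values are invented placeholders rather than analyses drawn from the GTVH literature.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    """One joke described by the six GTVH Knowledge Resources.
    TA and LM are optional, mirroring the caveat that they may be empty."""
    script_opposition: str            # SO, e.g. "smart/dumb"
    logical_mechanism: Optional[str]  # LM, e.g. "faulty reasoning"
    situation: str                    # SI, e.g. "changing a light bulb"
    target: Optional[str]             # TA, the butt of the joke
    narrative_strategy: str           # NS, e.g. "riddle"
    language: str                     # LA, the actual wording chosen

def similarity(a: GTVHLabel, b: GTVHLabel) -> int:
    """Count how many Knowledge Resources two jokes share (0-6)."""
    da, db = asdict(a), asdict(b)
    return sum(1 for k in da if da[k] is not None and da[k] == db[k])

# Two invented light-bulb riddles that differ only in target and wording:
j1 = GTVHLabel("smart/dumb", "faulty reasoning", "changing a light bulb",
               "group A", "riddle", "How many ...?")
j2 = GTVHLabel("smart/dumb", "faulty reasoning", "changing a light bulb",
               "group B", "riddle", "How many ...? (variant)")
print(similarity(j1, j2))  # -> 4 shared KRs
```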
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
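Purely as an illustration of the template-driven punning programs described in the next paragraph, such a system can amount to nothing more than a fixed sentence frame filled from a hand-built word list. The template and entries below are invented for the example and are not taken from any published program; the point of the sketch is what it lacks, namely any semantic scripts or world knowledge.

```python
import random

# A single fixed riddle template with pre-defined substitution slots.
TEMPLATE = "What do you call a {adjective} {animal}? A {punning_answer}!"

# Hand-built entries: the "pun" is just a stored string, so nothing is
# computed about meaning or sound. This mirrors the finite set of
# pre-defined punning options pasted into a fixed frame.
ENTRIES = [
    {"adjective": "cold", "animal": "dog", "punning_answer": "pup-sicle"},
    {"adjective": "sleepy", "animal": "bull", "punning_answer": "bulldozer"},
    {"adjective": "magic", "animal": "dog", "punning_answer": "labracadabrador"},
]

def make_joke(rng: random.Random) -> str:
    """Fill the template from one pre-defined entry chosen at random."""
    return TEMPLATE.format(**rng.choice(ENTRIES))

if __name__ == "__main__":
    print(make_joke(random.Random(42)))
```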
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Actual_play] | [TOKENS: 2343]
Contents Actual play Actual play, also called live play, is a genre of podcast or web show in which people play tabletop role-playing games (TTRPGs) for an audience. Actual play often encompasses in-character interactions between players, storytelling from the gamemaster, and out-of-character engagements such as dice rolls and discussion of game mechanics. The genre emerged in the early 2000s and became more popular throughout the decade, particularly with the 2015 debut of Critical Role, an actual play webseries featuring professional voice actors. History According to Evan Torner writing in Watch Us Roll (2021), actual play is rooted in phenomena including magazine "play reports" of wargames and internet forums dedicated to role-playing games. With the emergence of esports, livestreamed gaming, and Let's Plays, actual plays of TTRPGs became a popular podcast and webseries format, and contributed to the resurgence of TTRPGs in the 2010s and 2020s. Academic Emily Friedman commented that many incorrectly attribute Acquisitions Incorporated as the start of actual play; however, "the Bradley University RolePlaying Society, originally known as BURPS", produced a DVD in 2008. Friedman noted these "may very well have been the first folks to record gameplay and circulate it for an audience that took entertainment value from it". In 2008, the creators of Penny Arcade partnered with Wizards of the Coast to create a podcast of a few 4th Edition Dungeons & Dragons adventures which led to the creation of the actual play Acquisitions Incorporated. After the podcast was well-received, the players began livestreaming games starting in 2010 at the PAX festival.: 108 Acquisitions Incorporated went on to be described by Inverse in 2019 as the "longest-running live play game". Critical Role, a web series in which professional voice actors play Dungeons & Dragons, launched in 2015. Critical Role has been credited by VentureBeat as responsible for making actual play shows "their own genre of entertainment", and has since become one of the most prominent actual play series. Another popular series is The Adventure Zone, a comedic actual play podcast which has featured several TTRPG systems. As of 2021[update], it received over 6 million monthly downloads, and ranked highly on Apple podcast charts. By 2021, there were hundreds of actual play podcasts. Many web festivals, such as New Jersey, Minnesota, Los Angeles, Baltimore, Cusco, and New Zealand, "now include actual play categories, and many have scholarship programs". Polygon highlighted that "web fest selections are quickly becoming one of the best places to discover the undersung 'ambitious middle' of actual plays — that is, shows that aspire to the same storytelling heights as the most popular troupes, but that lack the resources of time and production budget". Early visual layouts were often either "fullscreen, edited multi-camera shows" or simultaneous-display shows with "boxes arranged on screen: one for the Storyteller or Dungeon Master, two or more for the players, either separately (in remote shows) or in groups (in studio). Another box may display character art, battle maps, sponsors, or other information". The simultaneous-display would become the most prominent layout in the genre. 
This visual layout is also "a holdover from video game Let's Plays"; Friedman attributed the widespread usage of the simultaneous-display layout to Critical Role's dominance in the genre as well as the layout working well for remotely filmed shows which "boomed after the move to COVID-19 pandemic protocols in 2020". Virtual tabletops (VTTs) also became more commonly used in remote shows during the pandemic lockdown. Shelly Jones, writing in The Routledge Handbook of Role-Playing Game Studies (2024), commented that actual play shows have a wide range in "production quality of editing, sets, costumes, props, and the like" as well as episode length. They noted that some shows, such as Critical Role, have an average episode length of three to four hours with "campaigns that arc over a hundred episodes" while "other hit shows like High Rollers and Dimension 20 eschew lengthy run-times and seasons". The actual play genre has also seen format changes as creators jump to "newer social media platforms such as TikTok", including adjusting the length of episodes "and incorporating special effects and interactive elements to further engage new audiences". TTRPG publishers have engaged with actual plays by licensing shows based on their products, running their own, incorporating content from actual plays back into source material, and playtesting games in actual play format. L.A. by Night is an actual play licensed by the publisher Paradox Interactive, and based on their role-playing game Vampire: The Masquerade; it premiered on Geek & Sundry in 2018. Rivals of Waterdeep is an official Wizards of the Coast actual play show, based on their Dungeons & Dragons system. Wizards of the Coast has also published collaboration sourcebooks based on actual play shows, such as the Explorer's Guide to Wildemount (2020) based on Critical Role and Acquisitions Incorporated (2019) based on the live play game by the same name. Jones highlighted increased commercialization in the genre, noting that many sponsorships are from tabletop gaming accessory companies. Actual play productions have also expanded their reach through merchandise and transmedia products, including game supplements, comics, novels, and animated adaptations. Cultural impact In 2018, the Diana Jones Award for excellence in tabletop gaming named the concept of actual play as that year's award winner, marking the first year the award was not awarded to a game, organization, or individual. Academic Emily Friedman, writing for Los Angeles Review of Books, highlighted that "there's the elemental pleasure of being told a story, intertwined with the alchemy of watching that story be created in front of your eyes (or ears). [...] We perceive simultaneously the character played and the player playing". Friedman also commented that the largest "actual plays have viewer numbers that are the envy of some television networks". Amanda Farough wrote for VentureBeat that "the boundaries and barriers that have traditionally kept TTRPGs hidden behind an opaque divide have come tumbling down" and that actual play "long-form narrative is reshaping itself as an expression of both players and the audiences that accompany them on the journey ahead". Curtis D. Carbonell, in his book Dread Trident: Tabletop Role-Playing Games and the Modern Fantastic, commented that shows such as Acquisitions Incorporated and Critical Role reflect "a wider phenomenon made clear by numerous Youtube.com videos of individual gaming sessions by random groups ... 
The confluence of these digital and analog streamed elements adds to the increasing archive of realized gametexts that can be consumed and analyzed with the modern fantastic".: 108 Both Farough and Carbonell highlighted that actual play shows have also increased sales of TTRPGs and related products. Actual plays have contributed towards improving representation of people of color, women, and others in tabletop gaming, which has had a reputation of being primarily made up of white men. Maze Arcana's Sirens, with Satine Phoenix as dungeon master (DM), features an all-women group of players. Rivals of Waterdeep, DMed by Tanya DePass, and Into the Motherlands are actual play shows with casts that are entirely made up of people of color. Death2Divinity is an actual play show with an all-queer, "all fat-babe" cast. Actual play shows have also been credited with improving representation of LGBT people in media more generally. Entertainment website CBR has said that LGBT representation has been more easily incorporated into actual plays because they are often produced by independent creators and distributed online. The site named The Adventure Zone and Dimension 20 as two examples of actual plays which include LGBT characters. Academic Melissa Allen, in the Journal of Fandom Studies, wrote that "viewers often feel it is their right to address happenings in-game they do not like because they too are an active participant despite being viewers, not players", which has led actual play series to carefully navigate when to let fan conversations occur freely and when to intervene directly.: 162 In particular, she highlighted the problematic treatment of female players in fan discourse – "there is a small window in terms of mechanics knowledge fans find appropriate" for female players, noting that these players "have to be extremely knowledgeable of game mechanics to an almost impossible extent, but they cannot be so outwardly knowledgeable that they challenge men's standing as arbiters of D&D knowledge".: 166 Allen commented that fan criticisms often "reflect the unease when women and their characters are the focus of the campaign's story".: 169 Allen also stated that "despite the presence of these gatekeeping behaviours and comments that reflected misogynistic discourse, there were many that applauded the female players" in actual plays with fans who frequently challenged sexist discourse, "indicating that male preserve-sanctioned behaviours are actively pushed back against by many in the fandom".: 173 The Critical Role animated series The Legend of Vox Machina was crowdfunded on Kickstarter in 2019, where it raised US$11.39 million, setting the record for the most highly funded film or TV project in the platform's history. Following this, Amazon streaming service Prime Video acquired exclusive streaming rights to the series. The second Critical Role animated series adaptation, The Mighty Nein, premiered on November 19, 2025. The "Balance" campaign of The Adventure Zone was adapted into a series of graphic novels, the first of which was published in 2018. The "Fantasy High" campaign of Dimension 20 was adapted as a webcomic; it was first released on Webtoon in 2025. During the 2023 SAG-AFTRA strike, Charlie Hall of Polygon commented that "actual play, which has grown in popularity since well before the pandemic, has often pulled in Hollywood types to fill seats at the table. 
But neither SAG-AFTRA nor AMPTP is regularly involved in the productions that Polygon spoke with, and therefore they will not be affected". Justin Carter, for Gizmodo, stated it was tricky as the "fate of an Actual Play show depends on the company behind it, and possibly what platform it's released on" – shows such as Dimension 20 on the streaming service Dropout and Purple Worm! Kill! Kill! on the "upcoming 24-hour Dungeons & Dragons Adventure streaming channel" are impacted by the strike as they "fall under SAG's Electronic Media contract, and are thus shut down". However, other actual plays such as Critical Role and shows on the Glass Cannon Network were not impacted by the strike. Christian Hoffer, for ComicBook.com, explained that YouTube and Twitch channels appear to be a "grey area", so "Critical Role and most Actual Play shows that air exclusively on YouTube and Twitch do not appear to be impacted by the SAG-AFTRA strike, while productions like Dimension 20 that hire talent and air on a closed platform (i.e., one that's not open to anyone to post content on) are impacted by the SAG-AFTRA strike". In August 2023, Sam Reich announced that all Dropout shows (including Dimension 20) had resumed production, as it was determined that their "New Media Agreement for Non-Dramatic Programming" was actually a non-struck SAG-AFTRA contract. List of actual play media See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-269] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Originally created by Markus "Notch" Persson using the Java programming language, Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPC) through trading emeralds for different goods and vice versa. 
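As a rough illustration of the voxel grid described above, block placement and mining can be pictured as a sparse mapping from integer coordinates to block types, with everything unrecorded treated as air. This is a minimal sketch of the general idea only, with invented names and methods, and is unrelated to Mojang's actual implementation.

```python
from typing import Dict, Tuple

Coord = Tuple[int, int, int]

class VoxelWorld:
    """Sparse voxel world: integer (x, y, z) -> block name; absent = air."""

    def __init__(self) -> None:
        self.blocks: Dict[Coord, str] = {}

    def place(self, pos: Coord, block: str) -> None:
        self.blocks[pos] = block

    def mine(self, pos: Coord) -> str:
        """Remove a block and return it, so it could go into an inventory."""
        return self.blocks.pop(pos, "air")

    def block_at(self, pos: Coord) -> str:
        return self.blocks.get(pos, "air")

world = VoxelWorld()
world.place((0, 64, 0), "dirt")
world.place((0, 65, 0), "torch")
print(world.mine((0, 64, 0)))      # -> "dirt"
print(world.block_at((0, 64, 0)))  # -> "air"
```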
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
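Returning to the map seed mentioned above: the essential property of seed-based procedural generation is determinism, in that the same seed and the same coordinates always produce the same terrain, so a world never has to be stored in full before it is explored. The sketch below illustrates only that principle, using an invented hash-based height function rather than the game's actual algorithm.

```python
import hashlib

def column_height(seed: int, x: int, z: int) -> int:
    """Deterministic terrain height for one column, derived only from the
    seed and the column's coordinates. Same seed -> same world."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return 60 + digest[0] % 16  # toy heights between 60 and 75

def generate_chunk(seed: int, cx: int, cz: int, size: int = 16):
    """A chunk here is just the heights of its size x size columns."""
    return [[column_height(seed, cx * size + dx, cz * size + dz)
             for dz in range(size)] for dx in range(size)]

# The same seed reproduces the same chunk; a different seed does not.
print(generate_chunk(12345, 0, 0) == generate_chunk(12345, 0, 0))  # True
print(generate_chunk(12345, 0, 0) == generate_chunk(54321, 0, 0))  # False (almost surely)
```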
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after five minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to ensure that players experience their maps as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters do not usually take any damage, nor are they affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers offer a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to let players run multiplayer games easily and safely without having to set up a server of their own. Unlike standard servers, Realms servers can only be joined by invited players and do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners on Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, cross-platform play between the Windows 10, iOS, and Android platforms was announced for Realms starting in June 2016, with Xbox One and Nintendo Switch support to follow later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, ranging from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's style, including the return of the first-person mode, the "blocky" visual aesthetic and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and the Xbox video game consoles, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support for server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Chromebooks compatible with the Google Play Store. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows 10 Edition of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together run to over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease with which it enables emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed over a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who did not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted for one of three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first vote this was changed so that losing mobs would still have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired. Cultural impact In September 2019, The Guardian ranked Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. In bolstering indie game development in the early 2010s, Minecraft also helped to popularize the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Demographic_history_of_Palestine_(region)] | [TOKENS: 10404]
Demographic history of Palestine (region) The population of the region of Palestine, which approximately corresponds to modern Israel and Palestine, has varied in both size and ethnic composition throughout its history. Studies of Palestine's demographic changes over the millennia have shown that a Jewish majority in the first century AD had changed to a Christian majority by the 3rd century AD, and later to a Muslim majority, which is thought to have existed in the area that later became Mandatory Palestine (1920–1948) since at least the 12th century AD, by which time the shift to the Arabic language had also been completed. Iron Age During the seventh century BC, no fewer than eight nations were settled in Palestine. These included the Arameans of the kingdom of Geshur; the Samaritans who replaced the Israelite kingdom in Samaria; the Phoenicians in the northern cities and parts of Galilee; the Philistines in the Philistine pentapolis; the three kingdoms of the Transjordan, namely Ammon, Moab and Edom; and the Judaeans of the Kingdom of Judah. According to Finkelstein and Broshi, the population of Palestine at the end of the eighth century BC was around 400,000. In the area of Judah in the central hills, Benjamin and Jerusalem, the population was approximately 110,000. By the early sixth century BC, Ofer's survey results suggest that around 100,000 lived in the kingdom of Judah, with the population in the central hills, Benjamin and Jerusalem about 69,000. A study by Yigal Shiloh of The Hebrew University suggests that the population of Palestine in the Iron Age could never have exceeded one million. He writes: "... the population of the country in the Roman-Byzantine period greatly exceeded that in the Iron Age..." Shiloh accepted Israeli archaeologist Magen Broshi's estimates of Palestine's population at 1,000,000–1,250,000 and noted that Iron Age Israel's population must have been lower, considering population growth. "...If we accept Broshi's population estimates, which appear to be confirmed by the results of recent research, it follows that the estimates for the population during the Iron Age must be set at a lower figure." One study of population growth from 1000 BC to 750 BC estimated that the population of Palestine (Judah and Israel) had an average natural growth rate of 0.4 per cent per annum. Persian period After the Babylonian conquest of Judah and exile, the population and settlement density of Jerusalem, the Shephelah and the Naqab desert dropped significantly. The Persian province of Yehud Medinata was sparsely populated and predominantly rural, with around half of the settlements of late Iron Age Judah and a population of around 30,000 in the 5th to 4th centuries BC. On the other hand, settlement continuity is discerned in the northern parts of the Judean mountains and the Benjamin area. Cities such as Tell en-Nasbeh, Gibeon and Bethel managed to escape destruction and remained continuously inhabited until early Achaemenid rule. As early as the 7th century BC, Edomites had lived in the Naqab desert and southern Judah, and by the time Judah fell in 586 BC there was already a substantial Edomite population in southern Judah. When the kingdom of Edom itself succumbed, those people continued its traditions in the south, which the Arabic-speaking Qedarites controlled. This area, which became known as Idumaea, was inhabited by a diverse population of Edomites, Judahites, Phoenicians, Qedarites and other Arabs.
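As a rough illustration of what the growth estimate above implies (the calculation below is a back-of-the-envelope reading, not a figure given by the study), a natural growth rate of 0.4 per cent per annum sustained over the 250 years from 1000 BC to 750 BC would multiply the population roughly 2.7-fold:

\[
(1 + 0.004)^{250} = e^{250 \ln 1.004} \approx e^{0.998} \approx 2.7
\]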
Based on analysis of epigraphic material and ostraca from the region, around 32% of recorded names were Arabic, 27% were Edomite, 25% were Northwest Semitic, 10% were Judahite (Hebrew) and 5% were Phoenician. A few names were also classified as Egyptian and Old Iranian. The exilic returnees resettled during the time of Cyrus the Great, perhaps with a heightened sense of their ethnic identity. Along the coast of western Palestine, the Phoenicians expanded their presence, while Moabites and Ammonites took refuge in the Cisjordan after the destruction of their kingdoms in 582 BC. Hellenistic and Hasmonean period Following the Macedonian conquest of the Achaemenid Empire and the subsequent Wars of the Diadochi, Palestine came under Hellenistic rule and was contested by the Seleucids and the Ptolemies. Between 167 and 160 BC, the Jewish rebel faction of the Maccabees revolted against Seleucid rule, ultimately leading to the independence of the Hasmonean dynasty. Under John Hyrcanus, the Hasmoneans expanded their territories beyond the traditional confines of Judea and incorporated non-Jewish areas in the process. 1 Maccabees relates that the non-Jewish inhabitants of Gezer and Joppa were expelled by Simon Thassi, who settled Jews in their place. Consistent with the account of Josephus, archaeological evidence attests to significant destruction in the urban and rural settlements in Idumaea, Samaria and the coastal cities from the Hasmonean conquests, followed by the resettlement of Jews in the newly conquered territories. Nevertheless, there is evidence that some of the non-Jewish inhabitants stayed in all of the newly conquered areas. Many sites in Idumaea experienced destruction, including Maresha, Khirbet el-Rasm, Tel Arad, Khirbet 'Uza and possibly Lachish. The lower city of Maresha and Tel Beersheba were soon abandoned, and evidence for an increased Idumaean presence in Egypt suggests some immigrated there. Idumaeans were eventually forcibly proselytized and co-opted into the Jewish nation by John Hyrcanus. Further north, the Galilee received significant Jewish migration from Judea after its conquest, contributing to a 50% increase in settlements, while the pagan population was greatly reduced. By the end of Alexander Jannaeus' reign, the Galilee was also predominantly Jewish. However, unlike John Hyrcanus, Alexander Jannaeus did not compel the non-Jews to assimilate to the Jewish ethnos and permitted minority ethnē to exist within Hasmonean borders, with the exception of the Phoenician coastal cities in the north, whose inhabitants Josephus claims he enslaved. Roman period The Roman conquest of Judea led by Pompey took place in 63 BC. The Roman occupation encompassed the end of Jewish independence in Judea, the last years of the Hasmonean kingdom, the Herodian age and the rise of Christianity, the First Jewish–Roman War, and the fall of Jerusalem and the destruction of the Second Temple, and later, the Bar Kokhba revolt. Modern estimates vary: Applebaum argues that in the Herodian kingdom there were 1.5 million Jews, a figure Ben David says covers the numbers in Judea alone. Salo Wittmayer Baron estimated the population at 2.3 million at the time of Roman emperor Claudius (reigned 41–54). According to Israeli archaeologist Magen Broshi, west of the Jordan River the population certainly did not exceed 1 million: "... the population of Palestine in antiquity did not exceed a million persons. 
It can also be shown, moreover, that this was more or less the size of the population in the peak period – the late Byzantine period, around AD 600" Broshi made calculations based on the grain-producing capacity of Palestine and on its role in the indigenous diet, assuming an average annual per-capita consumption of 200 kg. (with a maximum of 250 kg.), which would work out to the limit of a sustainable population of 1,000,000 people, a figure which, Broshi states, remained roughly constant down to the end of the Byzantine period (600 CE). The proportion of Jews to gentiles is also unknown. Local population displacements occurred with the expulsion of the Jews from Jerusalem – "In the earlier revolt in the previous century, 66–73 CE, Rome destroyed the Temple and forbade Jews to live in the remaining parts of Jerusalem; for this reason, the Rabbis gathered instead on the Mediterranean coast in Yavneh near Jaffa". Dispersal to other parts of the Roman Empire occurred, but some earlier settlements were also established as early as 4 BC. The Bar Kokhba revolt of 132–136 CE saw a major shift in the population of Palestine. The conflict had catastrophic consequences for the Jewish population in Judea, characterized by massive loss of life, widespread enslavement, and extensive forced displacement. The scale of devastation surpassed even that of the first revolt, leaving the region of Judea (not to be confused with the broader province, Judaea) in a state of desolation. Jews were expelled from Jerusalem and a broad surrounding area covering almost the entire traditional district of Judea. Historian Shimon Applebaum estimates that approximately two-thirds of Judaea's Jewish population perished in the revolt, and some scholars, including historian Joan E. Taylor, and genocide scholars Paul R. Bartrop and Samuel Totten, characterize the Roman suppression of the uprising as an act of genocide. According to a late epitome of Dio Cassius's Roman History, the Roman war operations in the country had left some 580,000 Jews dead, with many more dying of hunger and disease, while 50 of their most important outposts and 985 of their most famous villages were razed to the ground. "Thus," writes Dio Cassius, "nearly the whole of Judaea was made desolate." In 2003, historian Hannah Cotton described Dio's figures as highly plausible based on the accuracy of Roman census declarations. This was again supported in 2021 by an ethno-archaeological analysis by archaeologists Dvir Raviv and Chaim Ben David, who concluded that Dio's data represents a "reliable account" based on "contemporaneous documentation". Archaeological evidence corroborates that Jewish settlement in Judea was almost completely eradicated by the end of the revolt. To date, no site in the region of Judea has revealed a continuous occupation layer throughout the 2nd century CE. Findings indicate signs of devastation or depopulation within the first decades of the century, followed by a period of abandonment. When certain former Jewish settlements were reoccupied in the late 2nd or early 3rd century, the new inhabitants were typically non-Jews; this is reflected in a material culture that differs significantly from that of the earlier Jewish population. Roman policy also dictated the mass enslavement and deportation of captives beyond Judaea. The slave market was reportedly flooded with Jewish captives, who were sold into slavery and dispersed across the empire, significantly expanding the Jewish diaspora. 
One late ancient source states that Emperor Hadrian sold captives "for the price of a daily portion of food for a horse". Historian William V. Harris estimated that more than 100,000 Jews were enslaved. Jerome records the sale of Jewish slaves in Hebron and Gaza, their relocation to Egypt, and captives being resettled by Hadrian in the Cimmerian Bosporus (in modern-day Russia and Ukraine). The suppression of the revolt produced a large wave of refugees, some of whom settled in Babylonia, contributing to the spiritual development of the Jewish community in Mesopotamia during the following centuries. At the same time, Jewish communities continued to thrive in other parts of Judea and Palestine as a whole. According to David Goodblatt: "The destruction of the Jewish metropolis of Jerusalem and its environs and the eventual refoundation of the city as the Roman colony of Aelia Capitolina had lasting repercussions. However, in other parts of Palestine the Jewish population remained strong. Literary and archaeological evidence indicates that in the Late Roman-early Byzantine era Jewish communities thrived along the eastern, southern and western edges of Judah, in the Galilee, Golan and the Beit Shean region. And a strong Jewish presence continued throughout this period in many poleis, including Caesarea Maritima and Scythopolis." At some point in late antiquity, Jews became a minority, though the exact timing remains disputed. Eschel argues that a combination of three factors (the rise of Christianity, the Jewish–Roman wars and the Jewish diaspora) made Jews a minority. David Goodblatt contends that, although Jews suffered a setback following the Bar Kokhba revolt (132–136), they remained in the majority until the 3rd century and even beyond, when Christianity became the empire's official religion, and continued to thrive in different parts of Palestine. According to Doron Bar, archaeological evidence of synagogue remains demonstrates a central Jewish presence throughout Palestine during the entire Byzantine period. The accession of Constantine the Great in 312 and Christianity's becoming the official state religion of Rome in 391 consequently brought Jewish dominance in Palestine to an end. Some report that the Jewish majority had already been lost by the mid-3rd century, while others conclude that a Jewish majority lasted much longer – "What does seem clear is a different kind of change – immigration of Christians and the conversion of pagans, Samaritans, and Jews eventually produced a Christian majority". After the Bar Kokhba revolt of 132–136 CE, the make-up of the population of Palestine remains in doubt due to the sparsity of data in the historical record. Figures vary considerably as to the demographics of Palestine in the Christian era. No reliable data exist on the population of Palestine in the pre-Muslim period, either in absolute terms or in terms of shares of total population. Although many Jews were killed, expelled or sold off into slavery after the AD 66–70 and the 123–125 rebellions, the degree to which these transfers affected the Jewish dominance in Palestine is rarely addressed. What is certain is that Palestine did not lose its Jewish component. Goodblatt concludes that the Jews may have remained a majority into the 3rd century AD and even beyond. He notes that 'Jewish followers of Jesus' (Jewish Christians) would not have taken part in the rebellions. Moreover, non-Christian conversions from Judaism after the Bar Kokhba revolt were not given much attention. 
"Indeed, many must have reacted to the catastrophe with despair and total abandonment of Judaism. Apostates from Judaism (aside from converts to Christianity) received little notice in antiquity from either Jewish or non-Jewish writers, but ambitious individuals are known to have turned pagan before the war, and it stands to reason that many more did so after its disastrous conclusion. It is impossible to determine the number who joined the budding Christian movement and the number who disappeared into the polytheist majority." Byzantine period The administrative reorganization of the region in 284–305 AD by the Eastern Romans produced three Palestinian provinces of "Greater Palestine" which lasted from the 4th to the early 7th centuries: Palaestina Prima, which included the historic regions of Philistia, Judea and Samaria with the capital in Caesarea Maritima; Palaestina Secunda, which included the Galilee, the Golan Heights, as well as parts of Perea (western Transjordan) and the Decapolis, with its capital being Scythopolis; and Palaestina Salutaris which included Idumaea, the Naqab desert, Arabia Petraea, parts of Sinai, and Transjordan south of the Dead Sea, with its capital in Petra. Under the Byzantines, the religious landscape of Palestine underwent a significant transformation, propelled by the widescale Christianization of the local Jewish, pagan and Samaritan communities, as well as immigration of Christian pilgrims and monks into the Holy land. After the failure of Bar Kokhba revolt in 135, Jews were barred from living in a large part of former Judaea. During this period, Jews were concentrated in the Galilee, the Golan and marginal areas of the Judaean hills, between Eleutheropolis and Hebron in the Daromas, with significant Jewish settlement in the strip between Ein Gedi and Ascalon. Jews also lived in the southern Jordan Valley near Jericho, where they were employed in date and Balsam plantations; around Narbata, in Samaria and in the Jezreel Valley. However, the main centres of Jewish life and culture during this period were the cosmopolitan cities of the coastal plains, particularly Lydda, Jamnia, Azotus and Caesarea Maritima. Samaritans, originally concentrated in communities around Mount Gerizim, had spread beyond Samaria proper in the second century and established communities in the coastal plains, Judea and the Galilee, with some Samaritan villages located as far as Golan. However, the Samaritan revolts against the Byzantines in 484–573 and their forced conversion to Christianity under Maurice (582–602) and Heraclius (610–641) significantly diminished their numbers. Pagan tradition flourished in the period before Constantine, and pagans were a majority in the coastal plains, Judea, Samaria, the Naqab and much of the urban centers. Although proscribed in the Bible, pagan cults had thrived in Palestine throughout the First and Second Temple period, and many Jews did not escape it. According to E. Friedheim, many Jews in Palestine also embraced paganism and pagan cults again following the destruction of the Second Temple in 70 AD. Talmudic tradition forbid customs relating to 'the Amorite ways', showing a persistence of ancient Canaanite customs among the non-Jewish population and their inconsistent penetration into Jewish circles. Christian presence at the start of this period was mainly evidenced in Hellenistic cities and some of the villages in southern Judea. 
However, Christianity's early influence in the Jewish, Samaritan and pagan rural areas was minor, and it came at a much later stage, around the sixth century, when many of the community churches in Judea, western Galilee, the Naqab and other places were built. By the fifth century, many pagan temples in Palestine, including those in Jerusalem, Bethlehem, Mamre, Tel Qadesh and Tel Dan, had been demolished, and churches were erected in their place. In the rural sector, vast areas of Palestine, such as Galilee and Samaria, had Jewish and Samaritan majorities, and Christianity spread in these areas far more gradually, achieving real momentum only during the second half of the Byzantine period. Christianity had also spread in the Nabataean cities, towns and villages in southern Palestine, where a mixed Christian and pagan population lived, with elaborate churches built in Abdah, Mampsis and Subeita. Palestine reached its peak population of around 1 to 1.5 million during this period. However, estimates of the relative proportions of Jews, Samaritans and pagans vary widely and are speculative. By counting settlements, Avi-Yonah estimated that Jews comprised half the population of the Galilee at the end of the 3rd century, and a quarter in the other parts of the country, but had declined to 10–15% of the total by 614. On the other hand, by counting churches and synagogues, Tsafrir estimated the Jewish proportion to be 25% in the Byzantine period. Stemberger, however, considers that Jews were the largest population group at the beginning of the 4th century, closely followed by the pagans. According to Schiffman, DellaPergola and Bar, Christians only became the majority of the country's population at the beginning of the fifth century. Medieval Islamic Period Prior to the Muslim conquest of Palestine (635–640), Palaestina Prima had a population of 700,000, of which around 100,000 were Jews and 30,000 to 80,000 were Samaritans, with the remainder being Chalcedonian and Miaphysite Christians. Arabs came to constitute a ruling minority in Palestine who despised farming and kept their nomadic lifestyle, with the tribes of Lakhm, Judham, Kinana, Khath'am, Khuza'a, and Azd Sarat forming the army of Jund Filastin (the military district of Palestine). The pace of conversion to Islam among the various Christian, Jewish, and Samaritan communities in Palestine differed during the early period (638–1098), and opinions vary regarding the extent of Islamization during the early Islamic period. Some argue that Palestine was already majority Muslim by the time of the arrival of the First Crusade, while others contend that Christians were still in the majority and that the process of mass adoption of Islam took place only from the 13th century onwards, during the Mamluk period. According to archaeologist Gideon Avni, archaeological surveys show that most Christian settlements and sites preserved their identity up to the Crusader period, a picture reinforced by the large number of churches and monasteries all over Palestine. The early Muslim population, on the other hand, was confined to the Umayyad palaces in the Jordan Valley and around the Sea of Galilee, the ribat fortresses along the coast and the farms of the Naqab desert. Thus, conversion to Islam only gained real momentum in Palestine after Saladin's conquest of Jerusalem in 1187 and the expulsion of the Franks. Arabic gradually replaced Palestinian Aramaic, a shift that was largely complete by the 9th century, while Islamization was only finalized in the Mamluk period. 
Urban centers continued to flourish after the Islamic conquest, but changes took place in the 9th and 10th centuries. According to Ellenblum, climate disasters and earthquakes in the late 10th and 11th centuries created internal chaos and famines throughout the Middle East, as well as population decline. Michael Ehrlich argues that the decline in urban centers likely caused local ecclesiastical administrations to weaken or disappear altogether, leaving Christians most susceptible to conversion. When certain urban centers collapsed, surrounding communities would have converted to Islam. Per Ellenblum, the eastern Galilee and central Samaria, where Jews and Samaritans respectively were concentrated, were converted rather quickly and had a Muslim or Jewish-Muslim majority by the Crusader period. In Samaria, the sedentarization of nomadic tribes that had penetrated the region was accompanied by the mass conversion of the Samaritan population starting from the Tulunid period (884–905). By the 12th century, central Samaria was the only fully Islamized region in Palestine, while Christians predominated in southern Samaria, the Sinjil-Jerusalem area, the western Galilee and the Hebron Hills, which only converted in later periods. The introduction of Islam in the Hebron Hills is archaeologically attested in Jewish villages but not Christian ones, mainly in Susya and Eshtemoa, where the local synagogues were repurposed as mosques. Coastal cities such as Ramla and Ascalon thrived under the Fatimids (969–1099), where many Shiite scholars sought refuge and Shiite shrines were built, and the hinterland of Ramla was predominantly Muslim when the crusaders conquered it. On the other hand, Sufis played an important role in the Islamization of the hinterland of Jerusalem, where they built many religious buildings during the Mamluk period, transforming the cultural landscape. According to Nimrod Luz, when a Sufi settled in a Christian village, the local population often converted to Islam. According to Reuven Atimal, conversion to Islam appears to have halted and apparently even been reversed under the Kingdom of Jerusalem (1099–1291). With the advent of the Ayyubids (1187–1260) and the subsequent Mamluk takeover (1260–1517), the process of religious conversion to Islam appears to have accelerated. By the start of the Ottoman period in 1516, it is commonly thought that the Muslim majority in the country was more or less like that of the mid-19th century. Conversion to Islam among the Samaritan families of Nablus continued well into the 19th century. Ottoman period During the first century of Ottoman rule, around 1550, Bernard Lewis, in a study of Ottoman registers of the early Ottoman rule of Palestine, reports a population of around 300,000: From the mass of detail in the registers, it is possible to extract something like a general picture of the economic life of the country in that period. Out of a total population of about 300,000 souls, between a fifth and a quarter lived in the six towns of Jerusalem, Gaza, Safed, Nablus, Ramle, and Hebron. The remainder consisted mainly of peasants (fellahin), living in villages of varying size, and engaged in agriculture. Their main food-crops were wheat and barley in that order, supplemented by leguminous pulses, olives, fruit, and vegetables. In and around most of the towns there was a considerable number of vineyards, orchards, and vegetable gardens. 
According to Justin McCarthy, the population of Palestine throughout the 17th and 18th centuries (1601–1801) was likely not much smaller than it was in 1850 (~340,000), after which it started to increase.[page needed] In the late 18th and early 19th centuries, there were several migration waves of Egyptians to Palestine. One notable influx occurred in the 1780s due to a severe famine in Egypt. The main migration wave happened between 1829 and 1841, when Muhammad Ali and Ibrahim Pasha invaded Palestine; some Egyptian settlers and army deserters stayed in Palestine after the Egyptian retreat. These migrants primarily settled in the low country: in the Beisan area, Wadi Araba, the Jezreel valley, the Shephelah, the coastal plain and the Negev desert. This represented the largest migrant group prior to the Jewish migratory waves, with David Grossman estimating the total number at between 23,000 and 30,000 people. Algerian refugees ("Maghrebis") also arrived in Palestine in the 1850s following Abdelkader's rebellion. In the late nineteenth century, prior to the rise of Zionism, Jews are thought to have comprised between 2% and 5% of the population of Palestine, although the precise population is not known. Jewish immigration had begun following the 1839 Tanzimat reforms; between 1840 and 1880, the Jewish population of Palestine rose from 9,000 to 23,000. According to Alexander Scholch, Palestine in 1850 had about 350,000 inhabitants, 30% of whom lived in 13 towns; roughly 85% were Muslims, 11% were Christians and 4% Jews. The Ottoman census of 1878 gave demographic figures for the three districts that best approximated what later became Mandatory Palestine, that is, the Mutasarrifate of Jerusalem, the Nablus Sanjak, and the Acre Sanjak; in addition, some scholars estimate approximately 5,000–10,000 additional foreign-born Jews at this time. According to Ottoman statistics studied by Justin McCarthy, the population of Palestine in the early 19th century was 350,000, in 1860 it was 411,000 and in 1900 about 600,000, of which 94% were Arabs. The estimated 24,000 Jews in Palestine in 1882 represented just 0.3% of the world's Jewish population. The 1914 Ottoman census also recorded population figures. Per McCarthy's estimate, in 1914 Palestine had a population of 657,000 Muslim Arabs, 81,000 Christian Arabs, and 59,000 Jews. McCarthy estimates the non-Jewish population of Palestine at 452,789 in 1882, 737,389 in 1914, 725,507 in 1922, 880,746 in 1931 and 1,339,763 in 1946. Based on the work of Roberto Bachi, Sergio DellaPergola estimated that Palestine's population in 1914 was 689,000, comprising 525,000 Muslims, 94,000 Jews, and 70,000 Christians. According to another estimate, the Jewish population in 1914 was 85,000 and subsequently fell to 56,000 in 1916–1919 as a result of World War I. During the war, the Ottoman authorities deported many Jews with foreign citizenship, while others left after they were presented with a choice of taking Ottoman citizenship or leaving Palestine. By December 1915 about 14% of the Jewish population had left, mainly for Egypt, where they awaited the war's end so they could return to Palestine. According to Dr. Mutaz M. Qafisheh, the number of people who held Ottoman citizenship prior to the British Mandate in 1922 was just over 729,873, of which 7,143 were Jews. 
Qafisheh calculated this using population and immigration statistics from the 1946 Survey of Palestine, as well as the fact that 37,997 people acquired provisional Palestinian naturalization certificates in September 1922 for the purpose of voting in the legislative election, of which all but 100 were Jews. The 1922 census of Palestine lists 3,210 Christians as members of Armenian churches, 271 being Armenian Catholic (176 in Jerusalem-Jaffa, 10 in Samaria, and 85 in Northern) and 2,939 being Armenian Apostolic (11 in Southern, 2,800 in Jerusalem-Jaffa, eight in Samaria, and 120 in Northern) along with 2,970 Armenian speakers, including 2,906 in municipal areas (2,442 in Jerusalem, 216 in Jaffa, 101 in Haifa, four in Gaza, 13 in Nablus, one in Safad, 20 in Nazareth, 13 in Ramleh, one in Tiberias, 37 in Bethlehem, 25 in Acre, four in Tulkarem, 21 in Ramallah, six in Jenin, one in Beersheba, and one in Baisan). British Mandate era In 1920, the British Government's Interim Report on the Civil Administration of Palestine stated that there were hardly 700,000 people living in Palestine: There are now in the whole of Palestine hardly 700,000 people, a population much less than that of the province of Gallilee alone in the time of Christ. Of these 235,000 live in the larger towns, 465,000 in the smaller towns and villages. Four-fifths of the whole population are Moslems. A small proportion of these are Bedouin Arabs; the remainder, although they speak Arabic and are termed Arabs, are largely of mixed race. Some 77,000 of the population are Christians, in large majority belonging to the Orthodox Church, and speaking Arabic. The minority are members of the Latin or of the Uniate Greek Catholic Church, or—a small number—are Protestants. The Jewish element of the population numbers 76,000. Almost all have entered Palestine during the last 40 years. Prior to 1850 there were in the country only a handful of Jews. In the following 30 years a few hundreds came to Palestine. Most of them were animated by religious motives; they came to pray and to die in the Holy Land, and to be buried in its soil. After the persecutions in Russia forty years ago, the movement of the Jews to Palestine assumed larger proportions. Jewish agricultural colonies were founded. They developed the culture of oranges and gave importance to the Jaffa orange trade. They cultivated the vine, and manufactured and exported wine. They drained swamps. They planted eucalyptus trees. They practised, with modern methods, all the processes of agriculture. There are at the present time 64 of these settlements, large and small, with a population of some 15,000. By 1948, the population had risen to 1,900,000, of whom 68% were Arabs, and 32% were Jews (UNSCOP report, including Bedouin). Report and general abstract of the Jewish agriculture was taken by the Palestine Zionist Executive in April 1927. Object of the Census: (p 85) Demography: to enumerate all Jewish inhabitants living in the agricultural and semi-agricultural communities. (p 86) Number of Settlements: 130 places have been enumerated. If we consider the large settlements and the adjacent territories as one geographical unit, then we may group these places into 101 agricultural settlements, 3 semi-agricultural places (Affule, Shekhunath Borukhov and Neve Yaaqov) and 12 farms scattered throughout the country. In addition, there were a few places which, owing to technical difficulties, were not enumerated in the month of April. 
(Peqiin, Meiron, Mizpa and Zikhron David, numbering in the aggregate 100 persons). Of these agricultural settlements, 32 are located in Judea, 12 in the Plain of Sharon, 32 in the Plain of Jezreel, 16 in Lower Galilee, and 9 in Upper Galilee. Most of them have a very small population – about one half being inhabited by less than 100 persons each. In 42 settlements there are from 100 to 500 persons, and in only five does the population exceed 1,000. (p 86) Number of Inhabitants: The aggregate population living in the agricultural and semi-agricultural places was 30,500. (p 87 & p 98) The pre-war population accounts for 9,473 persons, which is slightly less than one-third of the present population, whereas the rest are post-war immigrants. Some 10,000 persons have settled since 1924, during the so-called middle-class immigration. Late Arab and Muslim immigration to Palestine At the end of the 18th century, there was a bi-directional movement between Egypt and Palestine. Between 1829 and 1841, thousands of Egyptian fellahin (peasants) arrived in Palestine fleeing Muhammad Ali Pasha's conscription, which he then cited as the casus belli for invading Palestine in October 1831, ostensibly to repatriate the Egyptian fugitives. Egyptian forced labourers, mostly from the Nile Delta, were brought in by Muhammad Ali and settled in sakināt (neighborhoods) along the coast for agriculture, which caused friction with the indigenous fellahin, who resented Muhammad Ali's plans and interference, prompting the wide-scale Peasants' revolt in Palestine in 1834. After the Egyptian defeat and retreat in 1841, many laborers and deserters stayed in Palestine. Most of these settled and were quickly assimilated in the cities of Jaffa and Gaza, the coastal plains and Wadi Ara. Estimates of Egyptian migrants during this period generally place them at 15,000–30,000. At the time, the sedentary population of Palestine fluctuated around 350,000. Palestine experienced a few waves of immigration of Muslims from the lands lost by the Ottoman Empire in the 19th century. Algerians, Circassians and Bosniaks were mostly settled on vacant land, and unlike the Egyptians they did not alter the geography of settlement significantly.: 73 The Naqab desert further south preserved its Bedouin population, who had reportedly lived in the area since the 7th century. Many Bedouin tribes moved from the Hejaz and Transjordan in the 14th and 15th centuries. According to the 1922 census of Palestine, "The Ottoman authorities in 1914 placed the tribal population of Beersheba at 55,000, and since that date there has been a migration of tribes from the Hejaz and Southern Transjordan into the Beersheba area mainly as a result of succession of adequate rainfalls and of pressure exerted by other tribes east of the River Jordan." For 1922, the census gives a figure of 74,910, including 72,998 in the tribal areas. Demographer Uziel Schmelz, in his analysis of Ottoman registration data for the 1905 populations of the Jerusalem and Hebron kazas, found that most Ottoman citizens living in these areas, comprising about one quarter of the population of Palestine, were living at the place where they were born. Specifically, of Muslims, 93.1% were born in their current locality of residence, 5.2% were born elsewhere in Palestine, and 1.6% were born outside Palestine. Of Christians, 93.4% were born in their current locality, 3.0% were born elsewhere in Palestine, and 3.6% were born outside Palestine. 
Of Jews (excluding the large fraction who were not Ottoman citizens), 59.0% were born in their current locality, 1.9% were born elsewhere in Palestine, and 39.0% were born outside Palestine. According to Roberto Bachi, head of the Israeli Institute of Statistics from 1949 onwards, between 1922 and 1945 there was a net Arab migration into Palestine of between 40,000 and 42,000, excluding 9,700 people who were incorporated after territorial adjustments were made to the borders in the 1920s. Based on these figures, and including those netted by the border alterations, Joseph Melzer calculates an upper boundary of 8.5% for Arab growth in the two decades, and interprets it to mean that the local Palestinian community's growth was generated primarily by natural increase, for both Muslims and Christians. According to a Jewish Agency survey, 77% of Palestinian population growth in Palestine between 1914 and 1938, during which the Palestinian population doubled, was due to natural increase, while 23% was due to immigration. Arab immigration was primarily from Lebanon, Syria, Transjordan, and Egypt (all countries that bordered Palestine). The overall assessment of several British reports was that the increase in the Arab population was primarily due to natural increase. These included the Hope Simpson Enquiry (1930), the Passfield White Paper (1930), the Peel Commission report (1937), and the Survey of Palestine (1945). However, the Hope Simpson Enquiry did note that there was significant illegal immigration from the surrounding Arab territories, while the Peel Commission and Survey of Palestine claimed that immigration played only a minor role in the growth of the Arab population. The 1931 census of Palestine considered the question of illegal immigration since the previous census in 1922. It estimated that unrecorded immigration during that period may have amounted to 9,000 Jews and 4,000 Arabs. It also gave the proportion of persons living in Palestine in 1931 who were born outside Palestine: Muslims, 2%; Christians, 20%; Jews, 58%. Statistical information for Arab immigration (and for expulsions when the clandestine migrants were caught), contrasted with the figures for Jewish immigration over the same period of 1936–1939, is given by Henry Laurens. According to Mark Tessler, at least some of the Arab population growth was the result of immigration, mostly from the Sinai, Lebanon, Syria, and Transjordan, stimulated by the relatively favorable economic conditions in Palestine, but he noted differing opinions among scholars over how substantial it was. He cited one study as putting the Arab population growth attributable to immigration between 1922 and 1931 at 7%, meaning that 4% of the Arab population in 1931 was foreign-born, while noting that another estimate put the growth in the Arab population attributable to immigration at 38.7%, which would mean that 11.8% of the Arab population in 1931 was foreign-born. Tessler wrote that "Israeli as well as Palestinian scholars have disputed this assertion, however, concluding that it is at best a theory and in all probability a myth." In a 1974 study, demographer Roberto Bachi estimated that about 900 Muslims per year were detected as illegal immigrants but not deported. He noted the impossibility of estimating illegal immigration that was undetected, or the fraction of those persons who eventually departed. 
He did note that there was an unexplained increase in the Muslim population between 1922 and 1931, and he did suggest, though qualifying it as a "mere guess", that this was due to a combination of unrecorded immigration (using the 1931 census report estimate) and undercounting in the 1922 census. While noting the uncertainty of earlier data, Bachi also observed that the Muslim population growth in the 19th century appeared to be high by world standards: "[B]etween 1800 and 1914, the Muslim population had a yearly average increase of an order of magnitude of roughly 6–7 per thousand. This can be compared to the very crude estimate of about 4 per thousand for the "less developed countries" of the world (in Asia, Africa, and Latin America) between 1800 and 1910. It is possible that some part of the growth of the Muslim population was due to immigration. However, it seems likely that the dominant determinant of this modest growth was the beginning of some natural increase." According to Justin McCarthy, "evidence for Muslim immigration into Palestine is minimal. Because no Ottoman records of that immigration have yet been discovered, one is thrown back on demographic analysis to evaluate Muslim migration." McCarthy argues that there was no significant Arab immigration into mandatory Palestine: From analyses of rates of increase of the Muslim population of the three Palestinian sanjaks, one can say with certainty that Muslim immigration after the 1870s was small. Had there been a large group of Muslim immigrants their numbers would have caused an unusual increase in the population and this would have appeared in the calculated rate of increase from one registration list to another ... Such an increase would have been easily noticed; it was not there. The argument that Arab immigration somehow made up a large part of the Palestinian Arab population is thus statistically untenable. The vast majority of the Palestinian Arabs resident in 1947 were the sons and daughters of Arabs who were living in Palestine before modern Jewish immigration began. There is no reason to believe that they were not the sons and daughters of Arabs who had been in Palestine for many centuries. McCarthy also concludes that there was no significant internal migration to Jewish areas attributable to better economic conditions: Some areas of Palestine did experience greater population growth than others, but the explanation for this is simple. Radical economic change was occurring all over the Mediterranean Basin at the time. Improved transportation, greater mercantile activity, and greater industry had increased the chances for employment in cities, especially coastal cities... Differential population increase was occurring all over the Eastern Mediterranean, not just in Palestine... The increase in Muslim population had little or nothing to do with Jewish immigration. In fact the province that experienced the greatest Jewish population growth (by .035 annually), Jerusalem Sanjak, was the province with the lowest rate of growth of Muslim population (.009). Fred M. Gottheil has questioned McCarthy's estimates of immigration. Gottheil says that McCarthy didn't give proper weight to the importance of economic incentives at the time, and that McCarthy cites Roberto Bachi's estimates as conclusive numbers, rather than lower bounds based on detected illegal immigration. 
Gad Gilbar has also concluded that the prosperity of Palestine in the 45–50 years before World War I was a result of the modernization and growth of the economy owing to its integration with the world economy and especially with the economies of Europe. Although the reasons for growth were exogenous to Palestine, the bearers were not waves of Jewish immigration, foreign intervention or Ottoman reforms but "primarily local Arab Muslims and Christians." However, Gilbar did attribute the rapid growth of Jaffa and Haifa in the final three decades of Ottoman rule in part to migration, writing that "both attracted population from the rural and urban surroundings and immigrants from outside Palestine." Yehoshua Porath believes that the notion of "large-scale immigration of Arabs from the neighboring countries" is a myth "proposed by Zionist writers". He writes: As all the research by historian Fares Abdul Rahim and geographers of modern Palestine shows, the Arab population began to grow again in the middle of the nineteenth century. That growth resulted from a new factor: the demographic revolution. Until the 1850s there was no "natural" increase of the population, but this began to change when modern medical treatment was introduced and modern hospitals were established, both by the Ottoman authorities and by the foreign Christian missionaries. The number of births remained steady but infant mortality decreased. This was the main reason for Arab population growth. ... No one would doubt that some migrant workers came to Palestine from Syria and Trans-Jordan and remained there. But one has to add to this that there were migrations in the opposite direction as well. For example, a tradition developed in Hebron to go to study and work in Cairo, with the result that a permanent community of Hebronites had been living in Cairo since the fifteenth century. Trans-Jordan exported unskilled casual labor to Palestine; but before 1948 its civil service attracted a good many educated Palestinian Arabs who did not find work in Palestine itself. Demographically speaking, however, neither movement of population was significant in comparison to the decisive factor of natural increase. Modern era As of 2014[update], Israeli and Palestinian statistics for the overall numbers of Jews and Arabs in the area west of the Jordan, inclusive of Israel and the Palestinian territories, are similar and suggest a rough parity between the two populations. Palestinian statistics estimate 6.1 million Palestinians for that area, while Israel's Central Bureau of Statistics estimates 6.2 million Jews living in sovereign Israel. Gaza is estimated by the Israel Defense Forces (IDF) to have 1.7 million Palestinians and the West Bank 2.8 million, while Israel proper has 1.7 million Arab citizens. According to Israel's Central Bureau of Statistics, as of May 2006, of Israel's 7 million people, 77% were Jews, 18.5% Arabs, and 4.3% "others". Among Jews, 68% were Sabras (Israeli-born), mostly second- or third-generation Israelis, and the rest were olim – 22% from Europe and the Americas, and 10% from Asia and Africa, including the Arab countries. According to these Israeli and Palestinian estimates, the population in Israel and the Palestinian territories stands at 6.1 to 6.2 million Palestinians and 6.1 million Jews.[failed verification] According to Sergio DellaPergola, if foreign workers and non-Jewish Russian immigrants in Israel are subtracted, Jews are already a minority in the land between the river and the sea. 
DellaPergola calculates that Palestinians as of January 2014 number 5.7 million as opposed to a "core Jewish population" of 6.1 million. The Palestinian statistics are contested by some right-wing Israeli think-tanks and non-demographers such as Yoram Ettinger, who claim they overestimate Palestinian numbers by double-counting and by counting Palestinians who live abroad. The double-counting argument is dismissed by Arnon Soffer, Ian Lustick and DellaPergola, the latter dismissing Ettinger's calculations as 'delusional' or manipulated for ignoring the birth-rate differentials between the two populations (3 children per Jewish mother vs 3.4 for Palestinians generally, and 4.1 in the Gaza Strip). DellaPergola allows, however, for an inflation in the Palestinian statistics due to the counting of Palestinians who are abroad, a discrepancy of some 380,000 individuals. The latest Israeli census was conducted by the Israel Central Bureau of Statistics in 2019. The Israeli census excludes the Gaza Strip. It also excludes all West Bank Palestinian localities, including those in Area C, while it includes annexed East Jerusalem. It also includes all Israeli settlements in the West Bank. The census also covers the occupied Syrian territory of the Golan Heights. As per this census, the total population in 2019 was 9,140,473. Israel's population consists of 7,221,442 "Jews and others" and 1,919,031 Arabs, almost all of whom are Palestinians; 26,261 of them, in the Golan Subdistrict, are Syrian, mostly Druze with a small number of Alawites. The population also includes the Druze community of Israel (i.e. not Syrian Druze), who generally self-identify as Israeli and are the only Arabic-speaking community with mandatory military service in the IDF. The latest Palestinian census was conducted by the Palestinian Central Bureau of Statistics in 2017. The Palestinian census covers the Gaza Strip and the West Bank, including East Jerusalem. The Palestinian census does not cover Israeli settlements in the West Bank, including those in East Jerusalem. The census does not provide any ethnic or religious distinction. However, it is reasonable to assume that almost everyone counted is Palestinian Arab. As per this census, the total population of the Palestinian territories was 4,780,978. The West Bank had a population of 2,881,687, whereas the Gaza Strip had a population of 1,899,291. 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Epidemic_model] | [TOKENS: 3049]
Contents Mathematical modelling of infectious diseases Mathematical models can project how infectious diseases progress to show the likely outcome of an epidemic (including in plants) and help inform public health and plant health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases and use those parameters to calculate the effects of different interventions, like mass vaccination programs. The modelling can help decide which intervention(s) to avoid and which to trial, or can predict future growth patterns. History The modelling of infectious diseases is a tool that has been used to study the mechanisms by which diseases spread, to predict the future course of an outbreak and to evaluate strategies to control an epidemic. The first scientist who systematically tried to quantify causes of death was John Graunt in his book Natural and Political Observations made upon the Bills of Mortality, in 1662. The bills he studied were listings of numbers and causes of deaths published weekly. Graunt's analysis of causes of death is considered the beginning of the "theory of competing risks", which according to Daley and Gani is "a theory that is now well established among modern epidemiologists". The earliest account of mathematical modelling of the spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory. In the early 20th century, William Hamer and Ronald Ross applied the law of mass action to explain epidemic behaviour. The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune individuals in a population. The Kermack–McKendrick epidemic model was successful in predicting the behavior of outbreaks very similar to that observed in many recorded epidemics. Recently, agent-based models (ABMs) have been used in place of simpler compartmental models. For example, epidemiological ABMs have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs, despite their complexity and their high computational demands, have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures when they are accurately calibrated. Assumptions Models are only as good as the assumptions on which they are based. If a model makes predictions that are out of line with observed results and the mathematics is correct, the initial assumptions must change to make the model useful. Types of epidemic models "Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. Stochastic models depend on the chance variations in risk of exposure, disease and other illness dynamics. Statistical agent-level disease dissemination in small or large populations can be determined by stochastic methods. 
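As a purely illustrative sketch of the stochastic approach described above (not taken from the article), the following Python fragment simulates a simple chain-binomial epidemic in the Reed–Frost style; the population size, per-contact infection probability and random seed are arbitrary assumptions chosen for the example.

import random

def reed_frost(s0, i0, p, generations, seed=0):
    """Chain-binomial (Reed-Frost style) stochastic epidemic sketch.
    s0, i0      -- initial counts of susceptible and infected individuals
    p           -- assumed probability that one infective infects one given
                   susceptible during a single generation
    generations -- number of generations to simulate
    Returns a list of (susceptible, infected) counts per generation."""
    random.seed(seed)
    s, i = s0, i0
    history = [(s, i)]
    for _ in range(generations):
        escape = (1 - p) ** i  # chance a susceptible avoids every infective
        new_i = sum(1 for _ in range(s) if random.random() > escape)
        s -= new_i
        i = new_i
        history.append((s, i))
    return history

# Example run with made-up parameters: 990 susceptibles, 10 infectives, p = 0.002.
for t, (s, i) in enumerate(reed_frost(990, 10, 0.002, 15)):
    print(t, s, i)

Because every draw is random, repeated runs with different seeds give different outbreak sizes, which is exactly the chance variation the stochastic formulation is meant to capture.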
When dealing with large populations, as in the case of tuberculosis, deterministic or compartmental mathematical models are often used. In a deterministic model, individuals in the population are assigned to different subgroups or compartments, each representing a specific stage of the epidemic. The transition rates from one class to another are mathematically expressed as derivatives, hence the model is formulated using differential equations. While building such models, it must be assumed that the population size in a compartment is differentiable with respect to time and that the epidemic process is deterministic. In other words, the changes in population of a compartment can be calculated using only the history that was used to develop the model. A related class of models formally belongs to the family of deterministic models but incorporates heterogeneous social features into the dynamics, such as individuals' levels of sociality, opinion, wealth and geographic location, which profoundly influence disease propagation. These models are typically represented by partial differential equations, in contrast to classical models described as systems of ordinary differential equations. Following the derivation principles of kinetic theory, they provide a more rigorous description of epidemic dynamics by starting from agent-based interactions. Sub-exponential growth A common explanation for the growth of epidemics holds that 1 person infects 2, those 2 infect 4, and so on, with the number of infected doubling every generation. It is analogous to a game of tag in which 1 person tags 2, those 2 tag 4 others who have never been tagged, and so on. As this game progresses it becomes increasingly frenetic as the tagged run past the previously tagged to hunt down those who have never been tagged. Thus this model of an epidemic leads to a curve that grows exponentially until it crashes to zero once the entire population has been infected; that is, it shows no herd immunity and none of the peak and gradual decline seen in reality. Epidemic models on networks Epidemics can be modeled as diseases spreading over networks of contact between people. Such a network can be represented mathematically with a graph and is called the contact network. Every node in a contact network is a representation of an individual and each link (edge) between a pair of nodes represents the contact between them. Links in the contact network may be used to transmit the disease between individuals, and each disease has its own dynamics on top of its contact network. The combination of disease dynamics under the influence of interventions, if any, on a contact network may be modeled with another network, known as a transmission network. In a transmission network, all the links are responsible for transmitting the disease. If such a network is locally tree-like, meaning that any local neighborhood in the network takes the form of a tree, then the basic reproduction number can be written in terms of the average excess degree of the transmission network: R0 = ⟨k²⟩/⟨k⟩ − 1, where ⟨k⟩ is the mean degree (average degree) of the network and ⟨k²⟩ is the second moment of the transmission network's degree distribution. It is, however, not always straightforward to find the transmission network out of the contact network and the disease dynamics. 
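As an illustration of the excess-degree formula above (not part of the original article), the following Python sketch estimates R0 from the degree sequence of a transmission network that is assumed to already be known and locally tree-like; the degree sequence itself is invented for the example. The article's own worked case, an Erdős–Rényi contact network, continues below.

def r0_from_degree_sequence(degrees):
    """R0 = <k^2>/<k> - 1 for a locally tree-like transmission network."""
    n = len(degrees)
    mean_k = sum(degrees) / n
    mean_k_sq = sum(k * k for k in degrees) / n
    return mean_k_sq / mean_k - 1

# Made-up degree sequence of a small, hypothetical transmission network.
degrees = [1, 2, 2, 3, 3, 3, 4, 5, 6, 10]
print(r0_from_degree_sequence(degrees))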
For example, if a contact network can be approximated by an Erdős–Rényi graph with a Poissonian degree distribution, and the disease-spreading parameters are as defined in the example above, such that β is the transmission rate per person and the disease has a mean infectious period of 1/γ, then the basic reproduction number is R0 = (β/γ)⟨k⟩, since ⟨k²⟩ − ⟨k⟩² = ⟨k⟩ for a Poisson distribution. Reproduction number The basic reproduction number (denoted by R0) is a measure of how transmissible a disease is. It is the average number of people that a single infectious person will infect over the course of their infection. This quantity determines whether the infection will spread, die out, or remain constant: if R0 > 1, then each person on average infects more than one other person, so the disease will spread; if R0 < 1, then each person infects fewer than one person on average, so the disease will die out; and if R0 = 1, then each person will infect on average exactly one other person, so the disease will become endemic: it will move throughout the population but not increase or decrease. Endemic steady state An infectious disease is said to be endemic when it can be sustained in a population without the need for external inputs. This means that, on average, each infected person is infecting exactly one other person (any more and the number of people infected will grow and there will be an epidemic; any less and the disease will die out). In mathematical terms, that is R0 S = 1: the basic reproduction number (R0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible (S), must be one (since those who are not susceptible do not feature in our calculations, as they cannot contract the disease). Notice that this relation means that for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the susceptible proportion: for example, R0 = 0.5 would imply that S has to be 2, a proportion that exceeds the whole population.[citation needed] Assume a rectangular stationary age distribution and let the ages of infection also have the same distribution for each birth year. Let the average age of infection be A, for instance when individuals younger than A are susceptible and those older than A are immune (or infectious). Then it can be shown by an easy argument that the proportion of the population that is susceptible is given by S = A/L, where L is the age at which in this model every individual is assumed to die. But the mathematical definition of the endemic steady state can be rearranged to give S = 1/R0. Therefore, by transitivity, 1/R0 = A/L, that is, R0 = L/A. This provides a simple way to estimate the parameter R0 using easily available data. For a population with an exponential age distribution, R0 = 1 + L/A. This allows the basic reproduction number of a disease to be estimated given A and L in either type of population distribution. Compartmental models in epidemiology Compartmental models are formulated as Markov chains. A classic compartmental model in epidemiology is the SIR model, which may be used as a simple model of an epidemic. 
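For readers who want to experiment with the SIR model just introduced, here is a minimal, illustrative Python sketch that integrates the standard deterministic SIR equations with a simple forward-Euler step; the rate parameters, step size and initial conditions are assumptions chosen for the example, not values from the article.

def sir_forward_euler(beta, gamma, s_init, i_init, r_init=0.0, dt=0.1, days=160):
    """Forward-Euler integration of the deterministic SIR equations
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i,
    with s, i, r expressed as fractions of a fixed population."""
    s, i, r = s_init, i_init, r_init
    trajectory = [(0.0, s, i, r)]
    for step in range(1, int(days / dt) + 1):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((step * dt, s, i, r))
    return trajectory

# Assumed parameters giving R0 = beta/gamma = 2.5, with 1% initially infected.
final_t, s_end, i_end, r_end = sir_forward_euler(0.5, 0.2, 0.99, 0.01)[-1]
print(final_t, s_end, i_end, r_end)

The Euler step is used here only for brevity; a production model would normally use an adaptive ODE solver, but the qualitative epidemic curve (rise, peak, decline) is the same.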
Multiple other types of compartmental models are also employed. In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S(t); infected, I(t); and recovered, R(t). The compartments used for this model thus consist of these three classes. There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease during which the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR).[citation needed] Infectious disease dynamics Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem. Mathematics of mass vaccination If the proportion of the population that is immune exceeds the herd immunity level for the disease, then the disease can no longer persist in the population and its transmission dies out. Thus, a disease can be eliminated from a population if enough individuals are immune due to either vaccination or recovery from prior exposure to disease. Examples include the eradication of smallpox, with the last wild case in 1977, and the certification of the eradication of indigenous transmission of 2 of the 3 types of wild poliovirus (type 2 in 2015, after the last reported case in 1999, and type 3 in 2019, after the last reported case in 2012). The herd immunity level will be denoted q. Recall that, for a stable state, R0 S = 1, so that S = 1/R0.[citation needed] S will be (1 − q), since q is the proportion of the population that is immune and q + S must equal one (since in this simplified model, everyone is either susceptible or immune). Then R0 (1 − q) = 1, which rearranges to q = 1 − 1/R0. Remember that this is the threshold level: transmission will die out only if the proportion of immune individuals exceeds this level due to a mass vaccination programme. We have just calculated the critical immunization threshold (denoted qc), qc = 1 − 1/R0. It is the minimum proportion of the population that must be immunized at birth (or close to birth) in order for the infection to die out in the population. Because the fraction p of the final size of the population that is never infected satisfies p = e^(−R0(1 − p)), we have ln p = −R0 (1 − p); solving for R0, we obtain R0 = −ln p / (1 − p). If the vaccine used is insufficiently effective or the required coverage cannot be reached, the program may fail to exceed qc. Such a program will protect vaccinated individuals from disease, but may change the dynamics of transmission.[citation needed] Suppose that a proportion of the population q (where q < qc) is immunised at birth against an infection with R0 > 1. The vaccination programme changes R0 to Rq, where Rq = R0 (1 − q). This change occurs simply because there are now fewer susceptibles in the population who can be infected. Rq is simply R0 minus those that would normally be infected but that cannot be now since they are immune. As a consequence of this lower basic reproduction number, the average age of infection A will also change to some new value Aq in those who have been left unvaccinated. Recall the relation that linked R0, A and L. 
Assuming that life expectancy has not changed, we now have $R_q = \frac{L}{A_q}$, so that $A_q = \frac{L}{R_q} = \frac{L}{R_0(1 - q)}$.[citation needed] But $R_0 = \frac{L}{A}$, so $A_q = \frac{A}{1 - q}$. Thus, the vaccination program may raise the average age of infection, and unvaccinated individuals will experience a reduced force of infection due to the presence of the vaccinated group. For a disease that leads to greater clinical severity in older populations, the unvaccinated proportion of the population may experience the disease relatively later in life than would occur in the absence of the vaccine. If a vaccination program causes the proportion of immune individuals in a population to exceed the critical threshold for a significant length of time, transmission of the infectious disease in that population will stop. If elimination occurs everywhere at the same time, then this can lead to eradication.[citation needed] Reliability Models have the advantage of examining multiple outcomes simultaneously, rather than making a single forecast. Models have shown varying degrees of reliability in past outbreaks, such as SARS, SARS-CoV-2, swine flu, MERS and Ebola. See also References Sources Further reading External links
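To make the vaccination algebra above concrete, here is a small numerical sketch. The figures chosen for R0, life expectancy L and coverage q are illustrative assumptions only; the relations q_c = 1 − 1/R0, Rq = R0(1 − q) and Aq = A/(1 − q) simply restate the derivation in the text.

```python
# Herd immunity threshold and the effect of sub-critical vaccination coverage,
# following the relations derived above. All numerical values are assumptions.

def critical_coverage(r0: float) -> float:
    """Critical immunisation threshold q_c = 1 - 1/R0 (from the endemic condition R0*S = 1)."""
    return 1.0 - 1.0 / r0

def effective_r(r0: float, q: float) -> float:
    """Reproduction number after immunising a fraction q at birth: Rq = R0 * (1 - q)."""
    return r0 * (1.0 - q)

r0 = 12.0          # assumed basic reproduction number of a childhood infection
life_exp = 70.0    # assumed life expectancy L, rectangular age distribution
coverage = 0.80    # assumed vaccination coverage, below the critical threshold

avg_age = life_exp / r0                    # A = L / R0 before vaccination
q_c = critical_coverage(r0)
r_q = effective_r(r0, coverage)
avg_age_vax = avg_age / (1.0 - coverage)   # Aq = A / (1 - q) among the unvaccinated

print(f"critical coverage q_c    = {q_c:.3f}")
print(f"Rq at {coverage:.0%} coverage     = {r_q:.2f} (still above 1)")
print(f"average age of infection = {avg_age:.1f} -> {avg_age_vax:.1f} years")
```

With these assumed numbers the threshold is about 92% coverage, so an 80% programme leaves Rq above 1 and raises the average age of infection among the unvaccinated from roughly 6 to roughly 29 years, which is the effect the text describes.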
========================================
[SOURCE: https://en.wikipedia.org/wiki/Insectoids_in_science_fiction] | [TOKENS: 700]
Contents Insectoids in science fiction and fantasy In science fiction and fantasy literatures, the term insectoid ("insect-like") denotes any fantastical fictional creature sharing physical or other traits with ordinary insects (or arachnids). Most frequently, insect-like or spider-like extraterrestrial life forms are meant; in such cases convergent evolution may presumably be responsible for the existence of such creatures. Occasionally, an Earth-bound setting is the venue, as in the film The Fly (1958), in which a scientist is accidentally transformed into a human–fly hybrid, or Franz Kafka's novella The Metamorphosis (1915), which does not bother to explain how a man becomes an enormous insect. Etymology The term insectoid denotes any creature or object that shares a similar body plan or traits with common Earth insects and arachnids. The term is a combination of "insect" and "-oid" (a suffix denoting similarity). History Insect-like extraterrestrials have long been a part of the tradition of science fiction. In the 1902 film A Trip to the Moon, Georges Méliès portrayed the Selenites (moon inhabitants) as insectoid. The Woggle-Bug appeared in L. Frank Baum's Oz books beginning in 1904. Olaf Stapledon incorporated insectoids in his 1937 novel Star Maker. In pulp fiction novels, insectoid creatures were frequently used as antagonists threatening the damsel in distress. Notable later depictions of hostile insect aliens include the antagonistic "Arachnids", or "Bugs", in Robert A. Heinlein's novel Starship Troopers (1959) and the "buggers" in Orson Scott Card's Ender's Game series (from 1985). The hive mind, or group mind, is a theme in science fiction going back to the alien hive society depicted in H. G. Wells's The First Men in the Moon (1901). Hive minds often imply a lack, or loss, of individuality, identity, or personhood. The individuals forming the hive may specialize in different functions, in the manner of social insects. The hive queen has been a figure in works including C. J. Cherryh's novel Serpent's Reach (1981) and the Alien film franchise (from 1979). Insectoid sexuality has been addressed in Philip José Farmer's The Lovers (1952), Octavia Butler's Xenogenesis novels (from 1987), and China Miéville's Perdido Street Station (2000). Analysis The insect motif became widely used in science fiction, supplying the "abject human/insect hybrids that form the most common enemy" in related media. Bugs or bug-like shapes have been described as a common trope in science fiction, and the term 'insectoid' has been called "almost a cliche" as a "ubiquitous way of representing alien life". Carl Sagan had insectoids in mind when, expressing his ambivalence about science fiction, he complained of the type of story which "simply ignores what we know of molecular biology and Darwinian evolution.... I have...problems with films in which spiders 30 feet tall are menacing the cities of earth: Since insects and arachnids breathe by diffusion, such marauders would asphyxiate before they could savage their first metropolis". See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Indie_role-playing_game] | [TOKENS: 2759]
Contents Indie role-playing game An indie role-playing game is a role-playing game published by individuals or small press publishers, in contrast to games published by large corporations. Indie tabletop role-playing game designers participate in various game distribution networks, development communities, and gaming conventions, both in person and online. Indie game designer committees grant annual awards for excellence. In the early 2000s, indie role-playing discussion forums such as The Forge developed innovative design patterns and theories. In 2010, the game Apocalypse World established the popular design framework Powered by the Apocalypse, inspiring hundreds of similar games. Starting in the early 2010s, indie game publishing provided new opportunities for LGBTQ writers to share underrepresented stories. Common examples of indie role-playing games include Apocalypse World and the Powered by the Apocalypse framework, The Quiet Year, Fiasco, Fall of Magic, Blades in the Dark, and Dialect. Definition of term Although there is no consensus on the exact definition of an "indie role-playing game," users of the term typically emphasize creative freedom and fair financial compensation for game designers. For example, an organizer of the 2022 Queer Games Bundle on Itch.io told Chase Carter for Dicebreaker: “Our goal is a future in which there are no more starving indie developers. Where corporations don’t rule our brains pumping out endless sequels but instead we have a vibrant games community that produces countless works...To get there we need developers to gain experience and make many games, and that can only happen with time and a livable income.” Some definitions of "indie role-playing game" require that all commercial, design, or conceptual elements of the game stay under the control of the creator(s), while others only specify that the game should be produced outside a corporate environment. All definitions agree that an indie role-playing game can be self-published. Some definitions additionally include small press games, because small press publishing frequently involves creator ownership and/or higher degrees of creative control for writers. Awards Multiple annual awards are given to indie games for excellence in multiple categories of design. The Indie Game Developer Network grants the Indie Groundbreaker Award in the categories of Most Innovative, Best Rules, Best Setting, Best Art, and Game of the Year. IndieCade offers awards for indie role-playing games in addition to video games. The ENNIE Awards and Diana Jones Award frequently honor indie role-playing games, though both awards are also awarded to games published by corporations. The Golden Cobra Challenge grants the Golden Cobra Award for freeform live action role-playing games, including indie tabletop role-playing games with freeform-like design elements. Several previous award committees for indie role-playing games are no longer operational. The Indie RPG Awards were presented to indie games from 2002 to 2018, with the main category of Indie RPG of the Year and sub-categories Best RPG Supplement, Best Free Game, Best Production, Most Innovative Game, and Best Support. Dicebreaker launched the Tabletop Awards in 2022 and awarded it yearly until the website was shuttered in 2024 following the sale of the Gamer Network to IGN. The 200 Word RPG Challenge granted awards from 2015 to 2019. 
Publication methods Since independent role-playing game publishers lack the financial backing of large companies, they often use different forms of publishing than the traditional three-tier model of publisher, distributor and retailer. Crowdfunding is a common model of promotion, funding, and distribution for indie role-playing games. Both individuals and small-press publishers frequently use Kickstarter and BackerKit for this purpose. Some publishers have no interest in financial success; others define it differently than most mainstream companies by emphasizing artistic fulfillment as a primary goal. Some independent publishers offer free downloads of games in digital form, while others charge a fee for digital download. Indie distribution is often achieved directly by the game's creator via e-commerce on Itch.io, DriveThruRPG, Kickstarter, BackerKit, or via in-person sales at gaming conventions. However, some fulfillment houses and small-scale distributors do handle indie products using the traditional three tier system of publisher, distributor and retailer. Starting in 2018, itch.io became a significant digital distributor of indie role-playing games, primarily in PDF form. Several organizations specialize in sales of indie games using a two-tier system of publisher and retail outlet. Indie Press Revolution distributes games that it labels as independent. Independent publishers may offer games only in digital format, only in print, or they may offer the same game in a variety of formats. Common digital formats include PDF and EPUB. Desktop publishing technologies have allowed indie designers to publish their games as bound books. The advent of print on demand (POD) publishing lowered production costs. Current indie design communities Indie game designers use itch.io to host game jams as inspiration for the development of new games using specific themes or game mechanics. Indie designers also sell games from multiple authors together as "bundles." Large indie roleplaying game bundles sometimes support political or charitable causes, such as Black Lives Matter, trans rights advocacy, abortion access funds, or material support for victims of war. Local gaming conventions provide dedicated space for playing, playtesting, and/or selling indie role-playing games. These include PAX Unplugged in Philadelphia, Breakout Con in Toronto, Big Bad Con in San Francisco, the Double Exposure conventions in Morristown, New Jersey, and BostonFIG. The Open Hearth Gaming Community focuses specifically on indie role-playing games and regularly schedules online play sessions through videoconferencing. In 2023, Open Hearth was founded to continue the online indie gaming calendar of the Gauntlet community after The Gauntlet (tabletop games producer) narrowed its focus to its indie game publishing and podcasting activities. Between 2018 and 2023, the Gauntlet also maintained a lively discussion forum about indie and OSR role-playing games. LGBTQ games Starting in the 2010s, indie role-playing games became a haven for LGBTQ storytelling, due to creators' ability to release non-mainstream content without seeking approval from mainstream publishing companies. Avery Alder's game Monsterhearts was one of the first published Powered by the Apocalypse games and an early example of a specifically queer-themed tabletop role-playing game, followed in 2014 by her first edition of Dream Askew, which focused on queer community-building and became the prototype for the Belonging Outside Belonging system. 
This laid the groundwork for Jay Dragon's 2019 Belonging Outside Belonging game Sleepaway, which included a custom gender creation system. In 2020, Lucian Kahn's game Visigoths vs. Mall Goths highlighted the bisexual community. The next year, April Kit Walsh's Thirsty Sword Lesbians, a Powered by the Apocalypse descendant, became the first tabletop game (indie or corporate) to win a Nebula Award. In 2022, Women are Werewolves by Yeonsoo Julian Kim and C.A.S. Taylor provided a framework for telling nonbinary stories. As of September 2024, Itch.io lists 573 physical games (as opposed to video games) with the "LGBT" tag. History The Forge, an internet forum overseen by Ron Edwards, provided the center of a self-identified indie RPG community in the early 2000s. This community generally defined indie games by the creators maintaining control of their work and avoiding traditional publishing. Tightly focused designs were a hallmark of this community. The Forge was strongly influenced by Ron Edwards' essay "System Does Matter" and GNS theory, which classified all participants in tabletop role-playing games under one of three personality types: gamist, narrativist, or simulationist. Indie RPGs inspired by the Forge often deliberately aligned with a narrativist approach to game design, focusing on strong characters confronting difficult moral choices. The Forge was started in 1999 by Ed Healy as an information site, with Ron Edwards serving as the editorial lead. In 2001, Edwards and Clinton R. Nixon recast the site, centered on the community forum that existed until 2012. A number of notable games emerged from the Forge community. William J. White, a professor at Penn State Altoona, highlighted that the Forge went through several eras. During the Spring era (2001–2004), the Forge experienced massive growth: by the end of 2004, there were eight general forums comprising 7,977 threads encompassing 94,733 individual posts, an expansion of almost 400% in thread volume since April 2001. The most active was the RPG Theory forum, with 28,322 posts in 1,639 threads, a thread density of 17.3 posts per thread. The next most active was the Indie Game Design forum, with 23,318 total posts and a thread density of 11.0. However, a decline in the quality of posts and other moderation actions led many people to leave the Forge for other online communities, and this collective group became known as the "Forge diaspora". In 2005, Edwards closed the "two theoretical discussion forums [...] on the premise that the Big Model was fundamentally complete". White states that the Autumn era (2007–2010) was affected by disagreements between Edwards and others who ran the community, such as Nixon, who at the time was the Forge's technical expert. In May 2010, there was a "major server crash", and the recovery split the site into a read-only archive (2001 to mid-2010) and active forums ("beginning with January 2008"). The Winter era (2011–2012) featured a much "pared-down forum structure", and the five remaining forums had "relatively low thread densities for all but the Actual Play forum". In 2012, Edwards announced the forthcoming closure of the community. White commented that the Forge "served to champion creator-owned 'indie RPGs' and game design innovation. After an initial surge of conceptual discussion and design experimentation on the forum itself from 2000 to 2004, [...]
it inspired a panoply of blogs and forums where further discussion took place." Starting in the mid-2000s, storytelling games based upon historical events began to emerge. Examples include Grey Ranks (2007) by Jason Morningstar, which takes place during the 1944 Warsaw Uprising, and Montsegur 1244 (2008) by Frederik Jensen, in which players tell a collaborative story about the Cathars. Powered by the Apocalypse (PbtA) is a narrative-focused game design framework developed by Meguey Baker and Vincent Baker for the 2010 game Apocalypse World. The Bakers offered PbtA to the indie RPG design community as a starting point for new games with different settings and modified game mechanics. Early PbtA games included Dungeon World, Monsterhearts, and Monster of the Week. As of October 2024, Itch.io listed 1,172 products labelled "PbtA." Story Games was an online discussion forum dedicated to indie role-playing games that focus on shared story creation. Creators used it to discuss design issues, report progress, and promote their games. The forum operated from 2012 until it ceased operation on August 15, 2019. Two sites that emerged to support the Story Games community were The Gauntlet Forums and Fictioneers. Twitter was a main center of indie RPG design discussion, artistic collaboration, and audience outreach from the mid-2010s until 2023. After the Forge forums closed in 2012, many members of that community continued discussing role-playing game theory on Google+ until that site also closed in 2019, after which they too moved their discussions to Twitter. After Elon Musk's purchase and rebranding of Twitter as X in 2023, many indie game writers and artists left the social network or struggled to continue using it for outreach with a reduced user base. Several digital publishing marketplaces that were later merged into Wolves of Freeport sold indie role-playing games between the 2000s and early 2020s. RPGNow and DriveThruRPG were two companies that sold indie role-playing games (as well as mainstream products) as downloadable PDFs. RPGNow created a separate storefront for low-selling or new entries to this market. Initial plans called for this storefront to use the "indie" moniker, but it was eventually decided to call the storefront RPGNow Edge instead. RPGNow Edge ceased operations in 2007. RPGNow and DriveThruRPG were consolidated into a single company, OneBookShelf, which initially maintained both sites. In August 2007, the two sites were rebranded, with RPGNow bearing the subtitle "The leading source for indie rpgs". In February 2019, all elements of RPGNow (including purchase libraries) were redirected to similar pages on DriveThruRPG. In 2023, OneBookShelf merged with Roll20 to become Wolves of Freeport. Related game design movements Some designers of indie role-playing games also participate in related game design movements such as the Old School Renaissance, indie video game development, or live action role-playing game design such as Nordic LARP. Examples of indie role-playing game designers also working in related movements include Anna Anthropy, Sharang Biswas, Emily Care Boss, Banana Chan, Lucian Kahn, Jonaya Kemper, Jason Morningstar, and Jeeyon Shim. References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Grasshopper_3D] | [TOKENS: 291]
Contents Grasshopper 3D Grasshopper is a visual programming language and environment that runs within the Rhinoceros 3D computer-aided design (CAD) application. The program was created by David Rutten at Robert McNeel & Associates. Programs are created by dragging components onto a canvas. The outputs of those components are then connected to the inputs of subsequent components. Overview Grasshopper is primarily used to build generative algorithms, such as for generative art. Many of Grasshopper's components create 3D geometry. Programs may also contain other types of algorithms, including numeric, textual, audio-visual and haptic applications. Advanced uses of Grasshopper include parametric modelling for structural engineering, architecture and fabrication, lighting performance analysis for energy-efficient architecture, and analysis of building energy use. The first version of Grasshopper, then named Explicit History, was released in September 2007. Grasshopper was made part of the standard Rhino toolset in Rhino 6.0 and remains so. AEC Magazine stated: "Popular among students and professionals, McNeel Associate's Rhino modelling tool is endemic in the architectural design world. The new Grasshopper environment provides an intuitive way to explore designs without having to learn to script." Research supporting this claim has come from product design and architecture. See also References Further reading External links
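Grasshopper programs are wired together on a canvas rather than written as text, but the dataflow idea described above (each component's outputs feeding the inputs of the next) can be sketched in ordinary Python. The snippet below is purely an illustrative analogy: the component names and functions are invented for this example and do not correspond to the actual Grasshopper or Rhino API.

```python
# Illustrative dataflow analogy only; NOT the Grasshopper/Rhino API.
# Each "component" is a pure function; wiring outputs to inputs mimics the canvas.
import math

def divide_curve(length: float, count: int) -> list[float]:
    """Component: divide a (straight) curve of a given length into parameter points."""
    return [length * i / (count - 1) for i in range(count)]

def sine_heights(params: list[float], amplitude: float) -> list[float]:
    """Component: map each parameter to a height on a sine wave."""
    return [amplitude * math.sin(p) for p in params]

def points(xs: list[float], zs: list[float]) -> list[tuple[float, float, float]]:
    """Component: combine two number streams into 3D points."""
    return [(x, 0.0, z) for x, z in zip(xs, zs)]

# "Wiring": the output of each component becomes the input of the next.
params = divide_curve(length=10.0, count=11)
heights = sine_heights(params, amplitude=2.0)
wave = points(params, heights)
print(wave[:3])
```

In Grasshopper the same wiring is done graphically, and changing an upstream input (for example the count) re-evaluates everything downstream, which is what makes the environment suited to exploring design variations without scripting.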
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTECarrell2008304-93] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are script opposition (SO), logical mechanism (LM), situation (SI), target (TA), narrative strategy (NS), and language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels (see the illustrative sketch below). "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, in which the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
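Returning to the GTVH labels discussed above, one way to picture how a concatenated six-KR label could support similarity comparisons is sketched below. The two example light-bulb jokes, their KR values, and the simple count-of-shared-values measure are all illustrative assumptions made for this sketch; they are not part of the GTVH itself.

```python
# Illustrative sketch: jokes as GTVH-style labels over the six Knowledge Resources.
# The example values and the overlap measure are assumptions for demonstration.
from dataclasses import dataclass, asdict

@dataclass
class JokeLabel:
    script_opposition: str   # SO
    logical_mechanism: str   # LM
    situation: str           # SI
    target: str              # TA (may be empty)
    narrative_strategy: str  # NS
    language: str            # LA

def shared_krs(a: JokeLabel, b: JokeLabel) -> int:
    """Count the Knowledge Resources on which two labels agree."""
    da, db = asdict(a), asdict(b)
    return sum(1 for k in da if da[k] and da[k] == db[k])

bulb_a = JokeLabel("normal/abnormal", "figure-ground reversal",
                   "changing a light bulb", "psychiatrists", "riddle", "neutral wording")
bulb_b = JokeLabel("normal/abnormal", "exaggeration",
                   "changing a light bulb", "", "riddle", "neutral wording")

print(f"shared KRs: {shared_krs(bulb_a, bulb_b)} of 6")
```

A real GTVH analysis would of course constrain the values hierarchically (for example, the light-bulb situation already implies the riddle narrative strategy), which a flat comparison like this one ignores.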
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory, developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Alternatively, the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading
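The template-driven pun programs described above are simple enough to sketch. The following toy example is hypothetical (it is not taken from any system cited here): it merely fills a fixed knock-knock template from a small, hand-made table of sound-alike pairs, which is roughly the level of "intelligence" these early generators displayed.

```python
import random

# A hypothetical, minimal sketch of template-based pun generation.
# Like the early systems described above, it has no semantic scripts:
# only a fixed template and a finite table of pre-defined sound-alike pairs.
SOUND_ALIKES = [
    ("lettuce", "let us"),
    ("police", "please"),
    ("canoe", "can you"),
]

TEMPLATE = "Knock, knock. Who's there? {word}. {word} who? {phrase} open the door!"

def knock_knock() -> str:
    """Fill the fixed template with a randomly chosen sound-alike pair."""
    word, phrase = random.choice(SOUND_ALIKES)
    return TEMPLATE.format(word=word.capitalize(), phrase=phrase.capitalize())

if __name__ == "__main__":
    print(knock_knock())
```

The sketch illustrates the limitation discussed above: the program cannot recognise a joke or judge funniness; it can only recombine options that a human has already encoded.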
========================================
[SOURCE: https://github.com/security/advanced-security/software-supply-chain] | [TOKENS: 864]
Secure your software supply chain Manage open source risks with GitHub’s supply chain security. Detect and fix threats early with automated scanning, updates, and policy enforcement—keeping your software resilient. Automatically detect vulnerabilities and get trusted updates with Dependabot. Dependabot surfaces the top 10% of your most critical alerts first using exploitation likelihood, severity scores, and triage rules. Easily sign and verify your builds with artifact attestations—simplifying security and compliance. From dependencies to deployment, lock down your supply chain. Identify critical risks faster with EPSS scores and automated alerts. Map dependencies and dependents, including transitive ones, with one-click SBOMs. Stay secure with automatic pull requests for the latest dependencies. Dependabot groups updates for faster reviews and merges. Enforce security and license compliance on pull requests with the dependency review action (available with GitHub Code Security). Easily sign and verify builds with artifact attestations. Meet external compliance frameworks like SOC2 or strengthen internal security with SLSA—up to Build Level 3. Whether you’re contributing to an open source project or choosing new tools for your team, your security needs are covered with GitHub. Best practices for more secure software Protect your entire GitHub workflow, from personal accounts to code and builds. Learn how to write more secure code from the start with DevSecOps. Explore common application security pitfalls and how to avoid them. When developing a software project, you likely use other software to build and run your application, such as open-source libraries, frameworks or other tools. These resources are collectively referred to as your “dependencies”, because your project depends on them to function properly. Your project could rely on hundreds of these dependencies, forming what is known as your "supply chain". Your supply chain can pose a security risk. If one of your dependencies has a known security weakness or a bug, malicious actors could exploit this vulnerability to, for example, insert malicious code (malware), steal sensitive data, or cause some other type of disruption to your project. This type of threat is called a "supply chain attack". Having vulnerable dependencies in your supply chain compromises the security of your own project, and puts your users at risk, too. One of the most important things you can do to protect your supply chain is to patch your vulnerable dependencies. Attackers don’t just target the dependencies you use; they also target user accounts and build processes. It’s important to secure both to ensure that the code you distribute hasn’t been tampered with. GitHub offers a range of features to help you understand and secure the dependencies in your environment, and to secure your GitHub accounts and build system. Unlike third-party security add-ons, GitHub’s supply chain features operate entirely in the native GitHub workflows that developers already know and love. 
By making it easier for developers to remediate vulnerabilities as they go, GitHub frees time for security teams to focus on critical strategies that protect businesses, customers, and communities from application-based vulnerabilities. Supply-chain Levels for Software Artifacts (SLSA) is a framework for improving the end-to-end integrity of a software artifact throughout its development lifecycle. It provides a comprehensive, step-by-step methodology for building integrity and provenance guarantees into your software supply chain. SLSA Level 3 signifies a significantly hardened software supply chain where builds are highly isolated, source code history is verified, and provenance is strictly controlled, providing a strong guarantee against tampering and ensuring the integrity of software artifacts. GitHub Actions and Artifact Attestations greatly simplify the journey to SLSA Level 3. You can export a software bill of materials or SBOM for your repository from the GitHub dependency graph. SBOMs allow transparency into your open source usage and help expose supply chain vulnerabilities, reducing supply chain risks. Most of GitHub’s supply chain features are available for free to all users. A select few advanced features are available to private repos only in GitHub Code Security. See pricing.
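As a concrete illustration of the SBOM export mentioned above, the sketch below pulls a repository's SPDX SBOM from GitHub's REST API dependency-graph endpoint. This is a minimal sketch, not an official integration: it assumes a personal access token in the GITHUB_TOKEN environment variable, relies on the third-party requests library, and the owner/repository names are placeholders.

```python
import json
import os

import requests  # third-party HTTP client, assumed to be installed

def fetch_sbom(owner: str, repo: str, token: str) -> dict:
    """Download the SPDX SBOM that GitHub generates from the dependency graph."""
    url = f"https://api.github.com/repos/{owner}/{repo}/dependency-graph/sbom"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    }
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    token = os.environ["GITHUB_TOKEN"]                   # assumed to be set by the caller
    sbom = fetch_sbom("octocat", "hello-world", token)   # placeholder repository
    # The SPDX document lists the packages the dependency graph knows about.
    for package in sbom["sbom"]["packages"][:10]:
        print(package.get("name"), package.get("versionInfo", ""))
    print(json.dumps(sbom["sbom"]["creationInfo"], indent=2))
```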
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-29] | [TOKENS: 4993]
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD/CE astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m , while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. 
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. 
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. 
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö) and the stars "hanging" from the Belt as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool). Though, this name perhaps is etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains.: Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). In ancient Aram, the constellation was known as Nephîlā′, the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic, saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. 
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who can retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned his arm and married his daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki which represents a child's string figure similar to a cat's cradle. Several precolonial Filipinos referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. 
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He sometimes is depicted to have a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and are referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-66] | [TOKENS: 4733]
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and up until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners. In Stratum IVa there was a mudbrick wall with no stone foundations, along with imported Egyptian pottery and local imitations. Other excavations revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, the emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. 
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod which was referred to as "al-Ludd" in Arabic served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla, as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth century in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596 Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax-rate of 33,3 % on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" =spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH/1641/2, the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. 
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians were 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish state and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda’s principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. 
A key event was the Palestinian expulsion from Lydda and Ramle, with the expulsion of 50,000-70,000 Palestinians from Lydda and Ramle by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods and construction in Jewish areas was given priority over construction in Arab neighborhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6% as "Arab". Education According to CBS, 38 schools and 13,188 pupils are in the city. They are spread out as 26 elementary schools and 8,325 elementary school pupils, and 13 high schools and 4,863 high school pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co" - a subsidiary of the Strauss Group and "Kashev" - the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. 
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009-2010, Dor Guez held an exhibit, Georgeopolis, at the Petach Tikva art museum that focuses on Lod. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy) was established soon after, but folded in 2007. Notable people Twin towns-sister cities Lod is twinned with: See also References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_network_theory_topics] | [TOKENS: 43]
Contents List of network theory topics Network theory is an area of applied mathematics. This page is a list of network theory topics. Network theorems Network properties Network theory applications Networks with certain properties Other terms Examples of networks
========================================
[SOURCE: https://en.wikipedia.org/wiki/SIR_model] | [TOKENS: 16161]
Contents Compartmental models (epidemiology) Compartmental models are a mathematical framework used to simulate how populations move between different states or "compartments". While widely applied in various fields, they have become particularly fundamental to the mathematical modelling of infectious diseases. In these models, the population is divided into compartments labeled with shorthand notation – most commonly S, I, and R, representing Susceptible, Infectious, and Recovered individuals. The sequence of letters typically indicates the flow patterns between compartments; for example, an SEIS model represents progression from susceptible to exposed to infectious and then back to susceptible again. These models originated in the early 20th century through pioneering epidemiological work by several mathematicians. Key developments include Hamer's work in 1906, Ross's contributions in 1916, collaborative work by Ross and Hudson in 1917, the seminal Kermack and McKendrick model in 1927, and Kendall's work in 1956. The historically significant Reed–Frost model, though often overlooked, also substantially influenced modern epidemiological modeling approaches. Most implementations of compartmental models use ordinary differential equations (ODEs), providing deterministic results that are mathematically tractable. However, they can also be formulated within stochastic frameworks that incorporate randomness, offering more realistic representations of population dynamics at the cost of greater analytical complexity. Epidemiologists and public health officials use these models for several critical purposes: analyzing disease transmission dynamics, projecting the total number of infections and recoveries over time, estimating key epidemiological parameters such as the basic reproduction number (R0) or effective reproduction number (Rt), evaluating potential impacts of different public health interventions before implementation, and informing evidence-based policy decisions during disease outbreaks. Beyond infectious disease modeling, the approach has been adapted for applications in population ecology, pharmacokinetics, chemical kinetics, and other fields requiring the study of transitions between defined states. For such investigations, and to advise decision makers, more complex models are often used. SIR model The SIR model is one of the simplest compartmental models, and many models are derivatives of this basic form. The model consists of three compartments: S, the number of susceptible individuals; I, the number of infectious individuals; and R, the number of recovered or removed individuals. This model is reasonably predictive for infectious diseases that are transmitted from human to human, and where recovery confers lasting resistance, such as measles, mumps, and rubella. These variables (S, I, and R) represent the number of people in each compartment at a particular time. To represent that the number of susceptible, infectious, and removed individuals may vary over time (even if the total population size remains constant), we make the precise numbers a function of t (time): S(t), I(t), and R(t). For a specific disease in a specific population, these functions may be worked out in order to predict possible outbreaks and bring them under control. Note that in the SIR model, $R(0)$ and $R_0$ are different quantities – the former describes the number of recovered individuals at $t = 0$, whereas the latter describes the ratio of the frequency of contacts to the frequency of recovery. 
As implied by the variable function of t, the model is dynamic in that the numbers in each compartment may fluctuate over time. The importance of this dynamic aspect is most obvious in an endemic disease with a short infectious period, such as measles in the UK prior to the introduction of a vaccine in 1968. Such diseases tend to occur in cycles of outbreaks due to the variation in the number of susceptibles (S(t)) over time. During an epidemic, the number of susceptible individuals falls rapidly as more of them are infected and thus enter the infectious and removed compartments. The disease cannot break out again until the number of susceptibles has built back up, e.g. as a result of offspring being born into the susceptible compartment.[citation needed] Each member of the population typically progresses from susceptible to infectious to recovered. This can be shown as a flow diagram in which the boxes represent the different compartments and the arrows the transitions between compartments (see diagram). For the full specification of the model, the arrows should be labeled with the transition rates between compartments. Between S and I, the transition rate is assumed to be $d(S/N)/dt = -\beta S I/N^2$, where $N$ is the total population, $\beta$ is the average number of contacts per person per unit time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject, and $SI/N^2$ is the fraction of all possible contacts that involve an infectious and a susceptible individual. (This is mathematically similar to the law of mass action in chemistry, in which random collisions between molecules result in a chemical reaction and the fractional rate is proportional to the concentration of the two reactants.) Between I and R, the transition rate is assumed to be proportional to the number of infectious individuals, i.e. $\gamma I$. If an individual is infectious for an average time period $D$, then $\gamma = 1/D$. This is also equivalent to the assumption that the length of time spent by an individual in the infectious state is a random variable with an exponential distribution. The "classical" SIR model may be modified by using more complex and realistic distributions for the I-R transition rate (e.g. the Erlang distribution). For the special case in which there is no removal from the infectious compartment ($\gamma = 0$), the SIR model reduces to a very simple SI model, which has a logistic solution, in which every individual eventually becomes infected. The dynamics of an epidemic, for example the flu, are often much faster than the dynamics of birth and death; therefore, birth and death are often omitted in simple compartmental models. The SIR system without so-called vital dynamics (birth and death, sometimes called demography) described above can be expressed by the following system of ordinary differential equations: $dS/dt = -\beta S I/N$, $dI/dt = \beta S I/N - \gamma I$, $dR/dt = \gamma I$, where $S$ is the stock of susceptible population in unit number of people, $I$ is the stock of infected in unit number of people, $R$ is the stock of removed population (either by death or recovery) in unit number of people, and $N$ is the sum of these three in unit number of people. 
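As a concrete illustration of the system above, the following minimal sketch integrates the SIR equations numerically with SciPy. The parameter values (beta = 0.3, gamma = 0.1, N = 1000, one initial infective) are arbitrary choices for demonstration, not values taken from this article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not from the article):
# beta = 0.3 effective contacts/day, gamma = 0.1 recoveries/day, so R0 = beta/gamma = 3.
beta, gamma = 0.3, 0.1
N = 1_000.0                       # total population
y0 = [N - 1.0, 1.0, 0.0]          # S(0), I(0), R(0): one initial infective

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N                 # dS/dt = -beta*S*I/N
    dI = beta * S * I / N - gamma * I      # dI/dt =  beta*S*I/N - gamma*I
    dR = gamma * I                         # dR/dt =  gamma*I
    return [dS, dI, dR]

sol = solve_ivp(sir, t_span=(0.0, 160.0), y0=y0, dense_output=True, max_step=0.5)

t = np.linspace(0.0, 160.0, 9)
S, I, R = sol.sol(t)
for ti, si, ii, ri in zip(t, S, I, R):
    print(f"t={ti:6.1f}  S={si:8.1f}  I={ii:8.1f}  R={ri:8.1f}")
# S + I + R stays equal to N (up to integration error), as required by the model.
```

With these assumed numbers the run shows a single outbreak that peaks and then dies out, while the total S + I + R remains fixed at N.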
β {\displaystyle \beta } is the infection rate constant in the unit number of people infected per day per infected person, and γ {\displaystyle \gamma } is the recovery rate constant in the unit fraction of a person recovered per day per infected person, when time is in unit day. This model was for the first time proposed by William Ogilvy Kermack and Anderson Gray McKendrick as a special case of what we now call Kermack–McKendrick theory, and followed work McKendrick had done with Ronald Ross.[citation needed] This system is non-linear, however it is possible to derive its analytic solution in implicit form. Firstly note that from: it follows that: expressing in mathematical terms the constancy of population N {\displaystyle N} . Note that the above relationship implies that one need only study the equation for two of the three variables. Secondly, we note that the dynamics of the infectious class depends on the following ratio: the so-called basic reproduction number (also called basic reproduction ratio). This ratio is derived as the expected number of new infections (these new infections are sometimes called secondary infections) from a single infection in a population where all subjects are susceptible. This idea can probably be more readily seen if we say that the typical time between contacts is T c = β − 1 {\displaystyle T_{c}=\beta ^{-1}} , and the typical time until removal is T r = γ − 1 {\displaystyle T_{r}=\gamma ^{-1}} . From here it follows that, on average, the number of contacts by an infectious individual with others before the infectious has been removed is: T r / T c . {\displaystyle T_{r}/T_{c}.} By dividing the first differential equation by the third, separating the variables and integrating we get where S ( 0 ) {\displaystyle S(0)} and R ( 0 ) {\displaystyle R(0)} are the initial numbers of, respectively, susceptible and removed subjects. Writing s 0 = S ( 0 ) / N {\displaystyle s_{0}=S(0)/N} for the initial proportion of susceptible individuals, and s ∞ = S ( ∞ ) / N {\displaystyle s_{\infty }=S(\infty )/N} and r ∞ = R ( ∞ ) / N {\displaystyle r_{\infty }=R(\infty )/N} for the proportion of susceptible and removed individuals respectively in the limit t → ∞ , {\displaystyle t\to \infty ,} one has (note that the infectious compartment empties in this limit). This transcendental equation has a solution in terms of the Lambert W function, namely This shows that at the end of an epidemic that conforms to the simple assumptions of the SIR model, unless s 0 = 0 {\displaystyle s_{0}=0} , not all individuals of the population have been removed, so some must remain susceptible. A driving force leading to the end of an epidemic is a decline in the number of infectious individuals. The epidemic does not typically end because of a complete lack of susceptible individuals. The role of both the basic reproduction number and the initial susceptibility are extremely important. In fact, upon rewriting the equation for infectious individuals as follows: it yields that if: then: i.e., there will be a proper epidemic outbreak with an increase of the number of the infectious (which can reach a considerable fraction of the population). On the contrary, if then i.e., independently from the initial size of the susceptible population the disease can never cause a proper epidemic outbreak. As a consequence, it is clear that both the basic reproduction number and the initial susceptibility are extremely important. 
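With S, I, R, N, β, and γ as defined above, the SIR system without vital dynamics takes the standard form consistent with the transition rates given earlier:

{\displaystyle {\frac {dS}{dt}}=-{\frac {\beta SI}{N}},\qquad {\frac {dI}{dt}}={\frac {\beta SI}{N}}-\gamma I,\qquad {\frac {dR}{dt}}=\gamma I,\qquad S+I+R=N.}

The final-size relation just derived can be evaluated directly with the Lambert W function. The following is a minimal sketch, assuming the transcendental relation s∞ = s0 exp(−R0(1 − s∞ − r0)) implied by the derivation above; the function name and numerical values are illustrative, not taken from the text.

```python
import numpy as np
from scipy.special import lambertw

def final_susceptible_fraction(R0, s0=0.999, r0=0.0):
    """Final susceptible fraction s_inf solving s_inf = s0*exp(-R0*(1 - s_inf - r0)),
    obtained from the principal branch of the Lambert W function."""
    arg = -R0 * s0 * np.exp(-R0 * (1.0 - r0))
    return float(np.real(-lambertw(arg) / R0))

# Illustrative example: R0 = 2 with almost the whole population initially susceptible.
s_inf = final_susceptible_fraction(R0=2.0)
print(f"s_inf ≈ {s_inf:.3f}, so the attack rate is ≈ {1 - s_inf:.3f}")

# Cross-check by fixed-point iteration of the same transcendental equation.
x = 0.5
for _ in range(200):
    x = 0.999 * np.exp(-2.0 * (1.0 - x))
print(f"fixed-point check: s_inf ≈ {x:.3f}")
```

For R0 > 1 the principal branch returns the root with s∞ < 1/R0, which is the epidemiologically relevant final size: some susceptibles always remain, in agreement with the statement above.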
Note that in the above model the function: models the transition rate from the compartment of susceptible individuals to the compartment of infectious individuals, so that it is called the force of infection. However, for large classes of communicable diseases it is more realistic to consider a force of infection that does not depend on the absolute number of infectious subjects, but on their fraction (with respect to the total constant population N {\displaystyle N} ): Capasso and, afterwards, other authors have proposed nonlinear forces of infection to model more realistically the contagion process. In 2014, Harko and coauthors derived an exact so-called analytical solution (involving an integral that can only be calculated numerically) to the SIR model. In the case without vital dynamics setup, for S ( u ) = S ( t ) {\displaystyle {\mathcal {S}}(u)=S(t)} , etc., it corresponds to the following time parametrization for with initial conditions where u T {\displaystyle u_{T}} satisfies I ( u T ) = 0 {\displaystyle {\mathcal {I}}(u_{T})=0} . By the transcendental equation for R ∞ {\displaystyle R_{\infty }} above, it follows that u T = e − ( R ∞ − R ( 0 ) ) / ρ ( = S ∞ / S ( 0 ) {\displaystyle u_{T}=e^{-(R_{\infty }-R(0))/\rho }(=S_{\infty }/S(0)} , if S ( 0 ) ≠ 0 ) {\displaystyle S(0)\neq 0)} and I ∞ = 0 {\displaystyle I_{\infty }=0} . An equivalent so-called analytical solution (involving an integral that can only be calculated numerically) found by Miller yields Here ξ ( t ) {\displaystyle \xi (t)} can be interpreted as the expected number of transmissions an individual has received by time t {\displaystyle t} . The two solutions are related by e − ξ ( t ) = u {\displaystyle e^{-\xi (t)}=u} . Effectively the same result can be found in the original work by Kermack and McKendrick. These solutions may be easily understood by noting that all of the terms on the right-hand sides of the original differential equations are proportional to I {\displaystyle I} . The equations may thus be divided through by I {\displaystyle I} , and the time rescaled so that the differential operator on the left-hand side becomes simply d / d τ {\displaystyle d/d\tau } , where d τ = I d t {\displaystyle d\tau =Idt} , i.e. τ = ∫ I d t {\displaystyle \tau =\int Idt} . The differential equations are now all linear, and the third equation, of the form d R / d τ = {\displaystyle dR/d\tau =} const., shows that τ {\displaystyle \tau } and R {\displaystyle R} (and ξ {\displaystyle \xi } above) are simply linearly related. A highly accurate analytic approximant of the SIR model as well as exact analytic expressions for the final values S ∞ {\displaystyle S_{\infty }} , I ∞ {\displaystyle I_{\infty }} , and R ∞ {\displaystyle R_{\infty }} were provided by Kröger and Schlickeiser, so that there is no need to perform a numerical integration to solve the SIR model (a simplified example practice on COVID-19 numerical simulation using Microsoft Excel can be found here ), to obtain its parameters from existing data, or to predict the future dynamics of an epidemics modeled by the SIR model. The approximant involves the Lambert W function which is part of all basic data visualization software such as Microsoft Excel, MATLAB, and Mathematica. 
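For comparison with the analytic approximants discussed above, the SIR equations can also be integrated numerically; the following is a minimal sketch using SciPy's general-purpose ODE solver (population size and parameter values are illustrative, not taken from the text).

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma, N):
    S, I, R = y
    new_infections = beta * S * I / N
    return [-new_infections,                 # dS/dt
            new_infections - gamma * I,      # dI/dt
            gamma * I]                       # dR/dt

N, beta, gamma = 1_000_000, 0.4, 0.2         # illustrative values; R0 = beta/gamma = 2
y0 = [N - 10, 10, 0]                          # a small initial seed of infections
sol = solve_ivp(sir_rhs, (0, 300), y0, args=(beta, gamma, N),
                t_eval=np.linspace(0, 300, 301), rtol=1e-8)
S, I, R = sol.y
print(f"peak infectious ≈ {I.max():,.0f} on day {sol.t[I.argmax()]:.0f}")
print(f"final susceptible fraction ≈ {S[-1] / N:.3f}")
```

For R0 = β/γ = 2 the final susceptible fraction printed here should agree closely with the Lambert W result of the previous sketch, which provides a quick consistency check between the numerical and analytic routes.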
While Kendall considered the so-called all-time SIR model, in which the initial conditions S(0), I(0), and R(0) are coupled through the relations above, Kermack and McKendrick proposed studying the more general semi-time case, in which S(0) and I(0) are both arbitrary. This latter version, denoted the semi-time SIR model, makes predictions only for future times t > 0. An analytic approximant and exact expressions for the final values are available for the semi-time SIR model as well. Numerical solutions to the SIR model can be found in the literature; an example is using the model to analyze COVID-19 spreading data. Three reproduction numbers can be extracted from data analyzed with such numerical approximations. R0 characterizes the speed of reproduction at the beginning of the spread, when the whole population is assumed susceptible; for example, if β0 = 0.4 day−1 and γ0 = 0.2 day−1, one infectious person on average infects 0.4 susceptible people per day and recovers after 1/0.2 = 5 days. By the time this person has recovered, two people have been infected directly by them, so R0 = 2, i.e. the number of infectious people doubles in one cycle of 5 days. Data simulated by the model with R0 = 2, or real data fitted to it, will show the number of infectious people doubling faster than every 5 days, because the two newly infected people are themselves already infecting others. From the SIR model we can see that β is determined by the nature of the disease and also by the frequency of interaction between the infectious person I and the susceptible people S, as well as by the intensity and duration of those interactions – how closely and for how long people interact, and whether or not they wear masks – and it therefore changes over time as the average behavior of carriers and susceptible people changes. The model uses the product SI to represent these factors, but this product is referenced to the initial stage, when no action has been taken to prevent the spread and the whole population is susceptible, so all subsequent changes are absorbed into the change of β. γ is usually more stable over time, on the assumption that when an infectious person shows symptoms, they will seek medical attention or self-isolate. So if Rt is found to change, the most likely explanations are that the behavior of people in the community has changed from the pre-outbreak pattern, or that the disease has mutated into a new form. Costly mass detection and isolation of susceptible close contacts can reduce 1/γ, but their efficiency is under debate. This debate centres largely on the uncertainty in the number of days gained between the moment an infected person becomes infectious or detectable (whichever comes first) and the moment symptoms appear. If a person only becomes infectious after symptoms appear, or if detection only works for symptomatic people, then these prevention methods are unnecessary, and self-isolation and/or medical attention is the best way to reduce 1/γ.
The infectious period for COVID-19 typically begins within about a day of symptoms appearing, which makes mass detection at a typical cadence of once every few days largely ineffective. Rt does not tell us whether the spread will speed up or slow down in later stages, when the fraction of susceptible people in the community has dropped significantly through recovery or vaccination. Re corrects for this dilution effect by multiplying Rt by the fraction of the susceptible population over the total population; it accounts for the fact that, in the middle and late stages of the spread, many of the contacts of an infectious person are with immune individuals and therefore cannot transmit the disease. Thus, when Re > 1 there is an exponential-like outbreak; when Re = 1 a steady state is reached and the number of infectious people does not change over time; and when Re < 1 the disease decays and fades away. By converting the differential equations of the SIR model into discrete numerical form, one can set up recursive equations and calculate the S, I, and R populations from any given initial conditions, but errors accumulate over long calculation times away from the reference point, and a convergence test is sometimes needed to estimate them. Given a set of initial conditions and disease-spreading data, one can also fit the data with the SIR model and extract the three reproduction numbers; in this case the errors are usually negligible because the time step from the reference point is short. Any point in time can be used as the initial condition from which to predict the future with this numerical model, under assumptions about how parameters such as the population, Rt, and γ evolve in time. However, away from this reference point errors accumulate, so a convergence test is needed to find an optimal time step for more accurate results. Among these three reproduction numbers, R0 is most useful for judging the control pressure: a large value means the disease will spread very fast and will be very difficult to control. Rt is most useful for predicting future trends; for example, if we know that the frequency of social interactions has been reduced by 50% relative to the pre-outbreak level, with interaction intensities unchanged, we can set Rt = 0.5 R0, and if social distancing and masks cut the infection efficiency by another 50%, we can set Rt = 0.25 R0. Re correlates with the waves of the spread: whenever Re > 1 the spread accelerates, and when Re < 1 it slows down, which makes Re useful for short-term trend prediction. It can also be used to calculate the threshold population for vaccination/immunization at which herd immunity is reached, by setting Rt = R0 and Re = 1, i.e. S = N/R0. Consider a population characterized by a death rate μ and birth rate Λ, and in which a communicable disease is spreading.
The model with mass-action transmission is: for which the disease-free equilibrium (DFE) is: In this case, we can derive a basic reproduction number: which has threshold properties. In fact, independently from biologically meaningful initial values, one can show that: The point EE is called the Endemic Equilibrium (the disease is not totally eradicated and remains in the population). With heuristic arguments, one may show that R 0 {\displaystyle R_{0}} may be read as the average number of infections caused by a single infectious subject in a wholly susceptible population, the above relationship biologically means that if this number is less than or equal to one the disease goes extinct, whereas if this number is greater than one the disease will remain permanently endemic in the population. In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S ( t ) {\displaystyle S(t)} ; infected, I ( t ) {\displaystyle I(t)} ; and recovered, R ( t ) {\displaystyle R(t)} . The compartments used for this model consist of three classes: The flow of this model may be considered as follows: Using a fixed population, N = S ( t ) + I ( t ) + R ( t ) {\displaystyle N=S(t)+I(t)+R(t)} in the three functions resolves that the value N {\displaystyle N} should remain constant within the simulation, if a simulation is used to solve the SIR model. Alternatively, the analytic approximant can be used without performing a simulation. The model is started with values of S ( t = 0 ) {\displaystyle S(t=0)} , I ( t = 0 ) {\displaystyle I(t=0)} and R ( t = 0 ) {\displaystyle R(t=0)} . These are the number of people in the susceptible, infected and removed categories at time equals zero. If the SIR model is assumed to hold at all times, these initial conditions are not independent. Subsequently, the flow model updates the three variables for every time point with set values for β {\displaystyle \beta } and γ {\displaystyle \gamma } . The simulation first updates the infected from the susceptible and then the removed category is updated from the infected category for the next time point (t=1). This describes the flow persons between the three categories. During an epidemic the susceptible category is not shifted with this model, β {\displaystyle \beta } changes over the course of the epidemic and so does γ {\displaystyle \gamma } . These variables determine the length of the epidemic and would have to be updated with each cycle. Several assumptions were made in the formulation of these equations: First, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease with a rate of a {\displaystyle a} and an equal fraction b {\displaystyle b} of people that an individual makes contact with per unit time. Then, let β {\displaystyle \beta } be the multiplication of a {\displaystyle a} and b {\displaystyle b} . This is the transmission probability times the contact rate. Besides, an infected individual makes contact with b {\displaystyle b} persons per unit time whereas only a fraction, S / N {\displaystyle S/N} of them are susceptible. Thus, we have every infective can infect a b S = β S {\displaystyle abS=\beta S} susceptible persons, and therefore, the whole number of susceptibles infected by infectives per unit time is β S I {\displaystyle \beta SI} . 
For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, a number equal to the fraction γ {\displaystyle \gamma } (which represents the mean recovery/death rate, or 1 / γ {\displaystyle 1/\gamma } the mean infective period) of infectives are leaving this class per unit time to enter the removed class. These processes which occur simultaneously are referred to as the Law of Mass Action, a widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned. Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths and therefore, these factors are ignored in this model. The only steady state solution to the classic SIR model as defined by the differential equations above is I=0, S and R can then take any values. The model can be changed while retaining three compartments to give a steady-state endemic solution by adding some input to the S compartment. For example, one may postulate that the expected duration of susceptibility will be E ⁡ [ min ( T L ∣ T S ) ] {\displaystyle \operatorname {E} [\min(T_{L}\mid T_{S})]} where T L {\displaystyle T_{L}} reflects the time alive (life expectancy) and T S {\displaystyle T_{S}} reflects the time in the susceptible state before becoming infected, which can be simplified to: such that the number of susceptible persons is the number entering the susceptible compartment μ N {\displaystyle \mu N} times the duration of susceptibility: Analogously, the steady-state number of infected persons is the number entering the infected state from the susceptible state (number susceptible, times rate of infection) λ = β I N , {\displaystyle \lambda ={\tfrac {\beta I}{N}},} times the duration of infectiousness 1 μ + v {\displaystyle {\tfrac {1}{\mu +v}}} : There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR). Also, compartments for vaccination, detection, or infected vectors like fleas, ticks, or mosquitoes can be added. Compartmental models can also be used to model multiple risk groups, and even the interaction of multiple pathogens. Variations on the basic SIR model Some infections, for example, those from the common cold and influenza, do not confer any long-lasting immunity. Such infections may give temporary resistance but do not give long-term immunity upon recovery from infection, and individuals become susceptible again. We have the model: Note that denoting with N the total population it holds that: It follows that: i.e. the dynamics of infectious is ruled by a logistic function, so that ∀ I ( 0 ) > 0 {\displaystyle \forall I(0)>0} : It is possible to find an analytical solution to this model (by making a transformation of variables: I = y − 1 {\displaystyle I=y^{-1}} and substituting this into the mean-field equations), such that the basic reproduction rate is greater than unity. The solution is given as where I ∞ = ( 1 − γ / β ) N {\displaystyle I_{\infty }=(1-\gamma /\beta )N} is the endemic infectious population, χ = β − γ {\displaystyle \chi =\beta -\gamma } , and V = I ∞ / I 0 − 1 {\displaystyle V=I_{\infty }/I_{0}-1} . 
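The logistic behaviour of the SIS model just described can be checked directly. The sketch below assumes the closed-form solution has the usual logistic shape I(t) = I∞ / (1 + V e^(−χt)), with I∞, χ, and V as defined above; the parameter values are illustrative.

```python
import numpy as np

def sis_infectious(t, N, beta, gamma, I0):
    """SIS solution, assuming the logistic form I(t) = I_inf / (1 + V*exp(-chi*t))
    with I_inf = (1 - gamma/beta)*N, chi = beta - gamma, V = I_inf/I0 - 1."""
    I_inf = (1.0 - gamma / beta) * N
    chi = beta - gamma
    V = I_inf / I0 - 1.0
    return I_inf / (1.0 + V * np.exp(-chi * t))

# Cross-check against a crude Euler integration of dI/dt = beta*I*(N - I)/N - gamma*I.
N, beta, gamma, I0, dt = 10_000, 0.5, 0.2, 10.0, 0.01   # illustrative values
I_num = I0
for _ in range(int(60 / dt)):
    I_num += dt * (beta * I_num * (N - I_num) / N - gamma * I_num)

print(f"analytic I(60) ≈ {sis_infectious(60.0, N, beta, gamma, I0):,.1f}")
print(f"Euler    I(60) ≈ {I_num:,.1f}")   # both approach the endemic level I_inf = 6,000
```

Both numbers approach the endemic level I∞ = (1 − γ/β)N, illustrating that when β > γ the SIS model settles at an endemic equilibrium rather than dying out.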
As the system is assumed to be closed, the susceptible population is then S ( t ) = N − I ( t ) {\displaystyle S(t)=N-I(t)} . Whenever the integer nature of the number of agents is evident (populations with fewer than tens of thousands of individuals), inherent fluctuations in the disease spreading process caused by discrete agents result in uncertainties. In this scenario, the evolution of the disease predicted by compartmental equations deviates significantly from the observed results. These uncertainties may even cause the epidemic to end earlier than predicted by the compartmental equations. As a special case, one obtains the usual logistic function by assuming γ = 0 {\displaystyle \gamma =0} . This can be also considered in the SIR model with R = 0 {\displaystyle R=0} , i.e. no removal will take place. That is the SI model. The differential equation system using S = N − I {\displaystyle S=N-I} thus reduces to: In the long run, in the SI model, all individuals will become infected. The Susceptible-Infectious-Recovered-Deceased model differentiates between Recovered (meaning specifically individuals having survived the disease and now immune) and Deceased. The SIRD model has semi analytical solutions based on the four parts method. This model uses the following system of differential equations: where β , γ , μ {\displaystyle \beta ,\gamma ,\mu } are the rates of infection, recovery, and mortality, respectively. The Susceptible-Infectious-Recovered-Vaccinated model is an extended SIR model that accounts for vaccination of the susceptible population. This model uses the following system of differential equations: where β , γ , v {\displaystyle \beta ,\gamma ,v} are the rates of infection, recovery, and vaccination, respectively. For the semi-time initial conditions S ( 0 ) = ( 1 − η ) N {\displaystyle S(0)=(1-\eta )N} , I ( 0 ) = η N {\displaystyle I(0)=\eta N} , R ( 0 ) = V ( 0 ) = 0 {\displaystyle R(0)=V(0)=0} and constant ratios k = γ ( t ) / β ( t ) {\displaystyle k=\gamma (t)/\beta (t)} and b = v ( t ) / β ( t ) {\displaystyle b=v(t)/\beta (t)} the model had been solved approximately. The occurrence of a pandemic outburst requires k + b < 1 − 2 η {\displaystyle k+b<1-2\eta } and there is a critical reduced vaccination rate b c {\displaystyle b_{c}} beyond which the steady-state size S ∞ {\displaystyle S_{\infty }} of the susceptible compartment remains relatively close to S ( 0 ) {\displaystyle S(0)} . Arbitrary initial conditions satisfying S ( 0 ) + I ( 0 ) + R ( 0 ) + V ( 0 ) = N {\displaystyle S(0)+I(0)+R(0)+V(0)=N} can be mapped to the solved special case with R ( 0 ) = V ( 0 ) = 0 {\displaystyle R(0)=V(0)=0} . The numerical solution of this model to calculate the real-time reproduction number R t {\displaystyle R_{t}} of COVID-19 can be practiced based on information from the different populations in a community. Numerical solution is a commonly used method to analyze complicated kinetic networks when the analytical solution is difficult to obtain or limited by requirements such as boundary conditions or special parameters. It uses recursive equations to calculate the next step by converting the numerical integration into Riemann sum of discrete time steps e.g., use yesterday's principal and interest rate to calculate today's interest which assumes the interest rate is fixed during the day. The calculation contains projected errors if the analytical corrections on the numerical step size are not included, e.g. 
when the interest rate of annual collection is simplified to 12 times the monthly rate, a projected error is introduced. Thus the calculated results will carry accumulative errors when the time step is far away from the reference point and a convergence test is needed to estimate the error. However, this error is usually acceptable for data fitting. When fitting a set of data with a close time step, the error is relatively small because the reference point is nearby compared to when predicting a long period of time after a reference point. Once the real-time R t {\displaystyle R_{t}} is pulled out, one can compare it to the basic reproduction number R 0 {\displaystyle R_{0}} . Before the vaccination, R t {\displaystyle R_{t}} gives the policy maker and general public a measure of the efficiency of social mitigation activities such as social distancing and face masking simply by dividing R t R 0 {\displaystyle {\frac {R_{t}}{R_{0}}}} . Under massive vaccination, the goal of disease control is to reduce the effective reproduction number R e = R t S N < 1 {\displaystyle R_{e}={\frac {R_{t}S}{N}}<1} , where S {\displaystyle S} is the number of susceptible population at the time and N {\displaystyle N} is the total population. When R e < 1 {\displaystyle R_{e}<1} , the spreading decays and daily infected cases go down. The susceptible-infected-recovered-vaccinated-deceased (SIRVD) epidemic compartment model extends the SIR model to include the effects of vaccination campaigns and time-dependent fatality rates on epidemic outbreaks. It encompasses the SIR, SIRV, SIRD, and SI models as special cases, with individual time-dependent rates governing transitions between different fractions. This model uses the following system of differential equations for the population fractions S , I , R , V , D {\displaystyle S,I,R,V,D} : where a ( t ) , v ( t ) , μ ( t ) , ψ ( t ) {\displaystyle a(t),v(t),\mu (t),\psi (t)} are the infection, vaccination, recovery, and fatality rates, respectively. For the semi-time initial conditions S ( 0 ) = 1 − η {\displaystyle S(0)=1-\eta } , I ( 0 ) = η {\displaystyle I(0)=\eta } , R ( 0 ) = V ( 0 ) = D ( 0 ) = 0 {\displaystyle R(0)=V(0)=D(0)=0} and constant ratios k = μ ( t ) / a ( t ) {\displaystyle k=\mu (t)/a(t)} , b = v ( t ) / a ( t ) {\displaystyle b=v(t)/a(t)} , and q = ψ ( t ) / a ( t ) {\displaystyle q=\psi (t)/a(t)} the model had been solved approximately, and exactly for some special cases, irrespective of the functional form of a ( t ) {\displaystyle a(t)} . This is achieved upon rewriting the above SIRVD model equations in equivalent, but reduced form where is a reduced, dimensionless time. The temporal dependence of the infected fraction I ( τ ) {\displaystyle I(\tau )} and the rate of new infections j ( τ ) = S ( τ ) I ( τ ) {\displaystyle j(\tau )=S(\tau )I(\tau )} differs when considering the effects of vaccinations and when the real-time dependence of fatality and recovery rates diverge. These differences have been highlighted for stationary ratios and gradually decreasing fatality rates. The case of stationary ratios allows one to construct a diagnostics method to extract analytically all SIRVD model parameters from measured COVID-19 data of a completed pandemic wave. The SIRVB model adds a breakthrough pathway in the SIRV model. 
The kinetic equations become: where infection rate a ( t ) {\displaystyle a(t)} can be write as β ( t ) / N {\displaystyle \beta (t)/N} , recovery rate μ ( t ) {\displaystyle \mu (t)} can be simplified to a constant γ {\displaystyle \gamma } , v ( t ) {\displaystyle v(t)} is the vaccination rate, b ( t ) {\displaystyle b(t)} is the break through ratio or fraction of immuned people susceptible to reinfection (<1). For many infections, including measles, babies are not born into the susceptible compartment but are immune to the disease for the first few months of life due to protection from maternal antibodies (passed across the placenta and additionally through colostrum). This is called passive immunity. This added detail can be shown by including an M class (for maternally derived immunity) at the beginning of the model. To indicate this mathematically, an additional compartment is added, M(t). This results in the following differential equations: Some people who have had an infectious disease such as tuberculosis never completely recover and continue to carry the infection, whilst not suffering the disease themselves. They may then move back into the infectious compartment and suffer symptoms (as in tuberculosis) or they may continue to infect others in their carrier state, while not suffering symptoms. The most famous example of this is probably Mary Mallon, who infected 22 people with typhoid fever. The carrier compartment is labelled C. For many important infections, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment E (for exposed). Assuming that the latency period is a random variable with exponential distribution with parameter a {\displaystyle a} (i.e. the average latency period is a − 1 {\displaystyle a^{-1}} ), and also assuming the presence of vital dynamics with birth rate Λ {\displaystyle \Lambda } equal to death rate N μ {\displaystyle N\mu } (so that the total number N {\displaystyle N} is constant), we have the model: We have S + E + I + R = N , {\displaystyle S+E+I+R=N,} but this is only constant because of the simplifying assumption that birth and death rates are equal; in general N {\displaystyle N} is a variable. For this model, the basic reproduction number is: Similarly to the SIR model, also, in this case, we have a Disease-Free-Equilibrium (N,0,0,0) and an Endemic Equilibrium EE, and one can show that, independently from biologically meaningful initial conditions it holds that: In case of periodically varying contact rate β ( t ) {\displaystyle \beta (t)} the condition for the global attractiveness of DFE is that the following linear system with periodic coefficients: is stable (i.e. it has its Floquet's eigenvalues inside the unit circle in the complex plane). The SEIS model is like the SEIR model (above) except that no immunity is acquired at the end. In this model an infection does not leave any immunity thus individuals that have recovered return to being susceptible, moving back into the S(t) compartment. The following differential equations describe this model: For the case of a disease, with the factors of passive immunity, and a latency period there is the MSEIR model. An MSEIRS model is similar to the MSEIR, but the immunity in the R class would be temporary, so that individuals would regain their susceptibility when the temporary immunity ended. 
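For reference, the SEIR model with vital dynamics described above (birth rate Λ = Nμ, latency rate a, recovery rate γ) is usually written in the following standard form; this is a sketch of the usual equations consistent with the rates named in the text, together with the corresponding basic reproduction number:

{\displaystyle {\frac {dS}{dt}}=\mu N-\mu S-{\frac {\beta IS}{N}},\qquad {\frac {dE}{dt}}={\frac {\beta IS}{N}}-(\mu +a)E,\qquad {\frac {dI}{dt}}=aE-(\mu +\gamma )I,\qquad {\frac {dR}{dt}}=\gamma I-\mu R,}

{\displaystyle R_{0}={\frac {a}{\mu +a}}\cdot {\frac {\beta }{\mu +\gamma }},}

the product of the probability of surviving the latent period, a/(μ + a), and the expected number of infections produced during the infectious period, β/(μ + γ).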
When developing more detailed models for in-depth analysis, models are mostly generated for specific outbreak scenarios of specific diseases, including compartments for targeted research questions like hospitalization compartments or detection dynamics. Even though those models are often tailored for specific situations, there are complex models, still usable for a broad variety of different diseases. One of those attempts to create a general model includes twelve compartments, extending the well-known SEIR model by a second stage of infection, detection compartments, and two doses of vaccination. Additionally smear infections are incorporated via an external Pathogen P {\displaystyle P} and a simplistic vector population is included by S V {\displaystyle S_{V}} and I V {\displaystyle I_{V}} . Moreover population dynamics like birth and death processes can be included. Such complex models enable a deeper understanding of infection dynamics and the introduction of different pharmaceutical and non-pharmaceutical interventions. It is well known that the probability of getting a disease is not constant in time. As a pandemic progresses, reactions to the pandemic may change the contact rates which are assumed constant in the simpler models. Counter-measures such as masks, social distancing, and lockdown will alter the contact rate in a way to reduce the speed of the pandemic. In addition, Some diseases are seasonal, such as the common cold viruses, which are more prevalent during winter. With childhood diseases, such as measles, mumps, and rubella, there is a strong correlation with the school calendar, so that during the school holidays the probability of getting such a disease dramatically decreases. As a consequence, for many classes of diseases, one should consider a force of infection with periodically ('seasonal') varying contact rate with period T equal to one year. Thus, our model becomes (the dynamics of recovered easily follows from R = N − S − I {\displaystyle R=N-S-I} ), i.e. a nonlinear set of differential equations with periodically varying parameters. It is well known that this class of dynamical systems may undergo very interesting and complex phenomena of nonlinear parametric resonance. It is easy to see that if: whereas if the integral is greater than one the disease will not die out and there may be such resonances. For example, considering the periodically varying contact rate as the 'input' of the system one has that the output is a periodic function whose period is a multiple of the period of the input. This allowed to give a contribution to explain the poly-annual (typically biennial) epidemic outbreaks of some infectious diseases as interplay between the period of the contact rate oscillations and the pseudo-period of the damped oscillations near the endemic equilibrium. Remarkably, in some cases, the behavior may also be quasi-periodic or even chaotic. Spatiotemporal compartmental models describe not the total number, but the density of susceptible/infective/recovered persons. Consequently, they also allow to model the distribution of infected persons in space. In most cases, this is done by combining the SIR model with a diffusion equation where D S {\displaystyle D_{S}} , D I {\displaystyle D_{I}} and D R {\displaystyle D_{R}} are diffusion constants. Thereby, one obtains a reaction-diffusion equation. (Note that, for dimensional reasons, the parameter β {\displaystyle \beta } has to be changed compared to the simple SIR model.) 
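The reaction–diffusion formulation just described can be explored numerically by combining the local SIR terms with a discrete Laplacian. The following is a minimal one-dimensional sketch in which s, i, and r are densities on a periodic grid; the grid spacing, diffusion constants, and epidemic parameters are illustrative choices, not values from the text.

```python
import numpy as np

# 1-D reaction–diffusion SIR on a periodic grid; s, i, r are densities per site.
L, dx, dt, steps = 200, 1.0, 0.05, 4000      # illustrative discretization
beta, gamma = 0.5, 0.2                        # local SIR parameters acting on densities
D_S, D_I, D_R = 0.1, 0.1, 0.1                 # diffusion constants

s = np.ones(L)
i = np.zeros(L); i[L // 2] = 0.01             # a localized initial seed of infection
s -= i
r = np.zeros(L)

def laplacian(u):
    # second difference with periodic boundaries
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    new_inf = beta * s * i                    # local force of infection on the density s
    ds = -new_inf + D_S * laplacian(s)
    di = new_inf - gamma * i + D_I * laplacian(i)
    dr = gamma * i + D_R * laplacian(r)
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr

print(f"sites where more than 1% of the local density was ever infected: {(i + r > 0.01).sum()} of {L}")
```

Starting from a localized seed, the infection spreads outward as a travelling front, which is the qualitative behaviour such spatiotemporal models were introduced to capture.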
Early models of this type have been used to model the spread of the black death in Europe. Extensions of this model have been used to incorporate, e.g., effects of nonpharmaceutical interventions such as social distancing. As social contacts, disease severity and lethality, as well as the efficacy of prophylactic measures may differ substantially between interacting subpopulations, e.g., the elderly versus the young, separate SEIR models for each subgroup may be used that are mutually connected through interaction links. Such Interacting Subpopulation SEIR models have been used for modeling the COVID-19 pandemic at continent scale to develop personalized, accelerated, subpopulation-targeted vaccination strategies that promise a shortening of the pandemic and a reduction of case and death counts in the setting of limited access to vaccines during a wave of virus Variants of Concern. The SIR model has been studied on networks of various kinds in order to model a more realistic form of connection than the homogeneous mixing condition which is usually required. A simple model for epidemics on networks in which an individual has a probability p of being infected by each of his infected neighbors in a given time step leads to results similar to giant component formation on Erdos Renyi random graphs. A stochastic compartment model with a transmission pathway via vectors has been developed recently in which a multiple random walkers approach is implemented to investigate the spreading dynamics in random graphs of the Watts-Strogatz and the Barabási-Albert type to mimic human mobility patterns in complex real world environments such as cities, streets, and transportation networks. This model captures the class of vector transmitted infectious diseases such as Dengue, Malaria (transmission by mosquitoes), pestilence (transmission by fleas), and others. Dynamics of epidemics depend on how people's behavior changes in time. For example, at the beginning of the epidemic, people are ignorant and careless, then, after the outbreak of epidemics and alarm, they begin to comply with the various restrictions and the spreading of epidemics may decline. Over time, some people get tired/frustrated by the restrictions and stop following them (exhaustion), especially if the number of new cases drops down. After resting for some time, they can follow the restrictions again. But during this pause the second wave can come and become even stronger than the first one. Social dynamics should be considered. The social physics models of social stress complement the classical epidemics models. The simplest SIR-social stress (SIRSS) model is organised as follows. The susceptible individuals (S) can be split in three subgroups by the types of behavior: ignorant or unaware of the epidemic (Sign), rationally resistant (Sres), and exhausted (Sexh) that do not react on the external stimuli (this is a sort of refractory period). In other words: S(t) = Sign(t) + Sres(t) + Sexh(t). Symbolically, the social stress model can be presented by the "reaction scheme" (where I denotes the infected individuals): The main SIR epidemic reaction has different reaction rate constants β {\displaystyle \beta } for Sign, Sres, and Sexh. Presumably, for Sres, β {\displaystyle \beta } is lower than for Sign and Sign. The differences between countries are concentrated in two kinetic constants: the rate of mobilization and the rate of exhaustion calculated for COVID-19 epidemic in 13 countries. 
These constants for this epidemic in all countries can be extracted by the fitting of the SIRSS model to publicly available data Based on the classical SIR model, a Korteweg-de Vries (KdV)–SIR equation and its analytical solution have been proposed to illustrate the fundamental dynamics of an epidemic wave, the dependence of solutions on parameters, and the dependence of predictability horizons on various types of solutions. The KdV-SIR equation is written as follows: d 2 I d t − σ o 2 I + 3 2 σ o 2 I m a x I 2 = 0 {\displaystyle {\frac {d^{2}I}{dt}}-\sigma _{o}^{2}I+{\frac {3}{2}}{\frac {\sigma _{o}^{2}}{I_{max}}}I^{2}=0} . Here, σ o = γ ( R o − 1 ) {\displaystyle \sigma _{o}=\gamma (R_{o}-1)} , R o = β γ S o N {\displaystyle R_{o}={\frac {\beta }{\gamma }}{\frac {S_{o}}{N}}} , and I m a x = S o 2 ( R o − 1 ) 2 R o 2 {\displaystyle I_{max}={\frac {S_{o}}{2}}{\frac {(R_{o}-1)^{2}}{R_{o}^{2}}}} . S o {\displaystyle S_{o}} indicates the initial value of the state variable S {\displaystyle S} . Parameters σ o {\displaystyle \sigma _{o}} (σ-naught) and R o {\displaystyle R_{o}} (R-naught) are the time-independent relative growth rate and basic reproduction number, respectively. I m a x {\displaystyle I_{max}} presents the maximum of the state variables I {\displaystyle I} (for the number of infected persons). The KdV-SIR equation shares the same form as the Korteweg–De Vries equation in the traveling wave coordinate. An analytical solution to the KdV-SIR equation is written as follows: I = I m a x s e c h 2 ( σ o 2 t ) {\displaystyle I=I_{max}sech^{2}\left({\frac {\sigma _{o}}{2}}t\right)} , which represents a solitary wave solution. Heterogeneous (structured, Bayesian) model Modeling a full population of possibly millions people using two constants β {\displaystyle \beta } and γ {\displaystyle \gamma } seem far fetched; each individual has personal characteristics that influence the propagation : immunity status, contact habits and so on. So it is interesting to know what happens if, for instance, β {\displaystyle \beta } and γ {\displaystyle \gamma } are not two constants but some random variables (a pair for each individual). This procedure has several names : "heterogeneous model", "structuration" (see also below for age structured models) or "Bayesian" view. Surprising results emerge, for instance it was proved in that the number of infected at the peak of a heterogeneous epidemic is smaller than the deterministic epidemic having same average β {\displaystyle \beta } ; the same holds true for the total epidemic size S ( 0 ) − S ( ∞ ) {\displaystyle S(0)-S(\infty )} and other models, e.g. SEIR. Modelling vaccination The SIR model can be modified to model vaccination. Typically these introduce an additional compartment to the SIR model, V {\displaystyle V} , for vaccinated individuals. Below are some examples. In presence of a communicable diseases, one of the main tasks is that of eradicating it via prevention measures and, if possible, via the establishment of a mass vaccination program. Consider a disease for which the newborn are vaccinated (with a vaccine giving lifelong immunity) at a rate P ∈ ( 0 , 1 ) {\displaystyle P\in (0,1)} : where V {\displaystyle V} is the class of vaccinated subjects. 
It is immediate to show that: thus we shall deal with the long term behavior of S {\displaystyle S} and I {\displaystyle I} , for which it holds that: In other words, if the vaccination program is not successful in eradicating the disease, on the contrary, it will remain endemic, although at lower levels than the case of absence of vaccinations. This means that the mathematical model suggests that for a disease whose basic reproduction number may be as high as 18 one should vaccinate at least 94.4% of newborns in order to eradicate the disease. Modern societies are facing the challenge of "rational" exemption, i.e. the family's decision to not vaccinate children as a consequence of a "rational" comparison between the perceived risk from infection and that from getting damages from the vaccine. In order to assess whether this behavior is really rational, i.e. if it can equally lead to the eradication of the disease, one may simply assume that the vaccination rate is an increasing function of the number of infectious subjects: In such a case the eradication condition becomes: i.e. the baseline vaccination rate should be greater than the "mandatory vaccination" threshold, which, in case of exemption, cannot hold. Thus, "rational" exemption might be myopic since it is based only on the current low incidence due to high vaccine coverage, instead taking into account future resurgence of infection due to coverage decline. In case there also are vaccinations of non newborns at a rate ρ the equation for the susceptible and vaccinated subject has to be modified as follows: leading to the following eradication condition: This strategy repeatedly vaccinates a defined age-cohort (such as young children or the elderly) in a susceptible population over time. Using this strategy, the block of susceptible individuals is then immediately removed, making it possible to eliminate an infectious disease, (such as measles), from the entire population. Every T time units a constant fraction p of susceptible subjects is vaccinated in a relatively short (with respect to the dynamics of the disease) time. This leads to the following impulsive differential equations for the susceptible and vaccinated subjects: It is easy to see that by setting I = 0 one obtains that the dynamics of the susceptible subjects is given by: and that the eradication condition is: A huge literature recognizes that the vaccination can be seen as a game: in a population where everybody is vaccinated any epidemic will die off immediately so an additional person will have no interest to vaccinate at all. On the contrary, a person arriving in a population where nobody is vaccinated will have all incentives to vaccinate (the epidemic will break loose in such a population). So, it seems that the individual has interest to do the opposite of the population as a whole. But the population is the sum of all individuals, and the previous affirmation should be false. So, in fact, a Nash equilibrium is reached. Technical tools to treat such situations involve game theory or modern tools such as Mean-field game theory. The influence of age: age-structured models Age has a deep influence on the disease spread rate in a population, especially the contact rate. This rate summarizes the effectiveness of contacts between susceptible and infectious subjects. 
Taking into account the ages of the epidemic classes s ( t , a ) , i ( t , a ) , r ( t , a ) {\displaystyle s(t,a),i(t,a),r(t,a)} (to limit ourselves to the susceptible-infectious-removed scheme) such that: (where a M ≤ + ∞ {\displaystyle a_{M}\leq +\infty } is the maximum admissible age) and their dynamics is not described, as one might think, by "simple" partial differential equations, but by integro-differential equations: where: is the force of infection, which, of course, will depend, though the contact kernel k ( a , a 1 ; t ) {\displaystyle k(a,a_{1};t)} on the interactions between the ages. Complexity is added by the initial conditions for newborns (i.e. for a=0), that are straightforward for infectious and removed: but that are nonlocal for the density of susceptible newborns: where φ j ( a ) , j = s , i , r {\displaystyle \varphi _{j}(a),j=s,i,r} are the fertilities of the adults. Moreover, defining now the density of the total population n ( t , a ) = s ( t , a ) + i ( t , a ) + r ( t , a ) {\displaystyle n(t,a)=s(t,a)+i(t,a)+r(t,a)} one obtains: In the simplest case of equal fertilities in the three epidemic classes, we have that in order to have demographic equilibrium the following necessary and sufficient condition linking the fertility φ ( . ) {\displaystyle \varphi (.)} with the mortality μ ( a ) {\displaystyle \mu (a)} must hold: and the demographic equilibrium is automatically ensuring the existence of the disease-free solution: A basic reproduction number can be calculated as the spectral radius of an appropriate functional operator. One way to calculate R 0 {\displaystyle R_{0}} is to average the expected number of new infections over all possible infected types. The next-generation method is a general method of deriving R 0 {\displaystyle R_{0}} when more than one class of infectives is involved. This method, originally introduced by Diekmann et al. (1990), can be used for models with underlying age structure or spatial structure, among other possibilities. In this picture, the spectral radius of the next-generation matrix G {\displaystyle G} gives the basic reproduction number, R 0 = ρ ( G ) . {\displaystyle R_{0}=\rho (G).} Consider a sexually transmitted disease. In a naive population where almost everyone is susceptible, but the infection seed, if the expected number of gender 1 is f {\displaystyle f} and the expected number of infected gender 2 is m {\displaystyle m} , we can know how many would be infected in the next-generation. Such that the next-generation matrix G {\displaystyle G} can be written as: G = ( 0 f m 0 ) , {\displaystyle G={\begin{pmatrix}0&f\\m&0\end{pmatrix}},} where each element g i j {\displaystyle g_{ij}} is the expected number of secondary infections of gender i {\displaystyle i} caused by a single infected individual of gender j {\displaystyle j} , assuming that the population of gender i {\displaystyle i} is entirely susceptible. Diagonal elements are zero because people of the same gender cannot transmit the disease to each other but, for example, each f {\displaystyle f} can transmit the disease to m {\displaystyle m} , on average. Meaning that each element g i j {\displaystyle g_{ij}} is a reproduction number, but one where who infects whom is accounted for. If generation a {\displaystyle a} is represented with ϕ a {\displaystyle \phi _{a}} then the next generation ϕ a + 1 {\displaystyle \phi _{a+1}} would be G ϕ a {\displaystyle G\phi _{a}} . 
The spectral radius of the next-generation matrix is the basic reproduction number, R 0 = ρ ( G ) = m f {\displaystyle R_{0}=\rho (G)={\sqrt {mf}}} , that is here, the geometric mean of the expected number of each gender in the next-generation. Note that multiplication factors f {\displaystyle f} and m {\displaystyle m} alternate because, the infectious person has to 'pass through' a second gender before it can enter a new host of the first gender. In other words, it takes two generations to get back to the same type, and every two generations numbers are multiplied by m {\displaystyle m} × f {\displaystyle f} . The average per generation multiplication factor is therefore m f {\displaystyle {\sqrt {mf}}} . Note that G {\displaystyle G} is a non-negative matrix so it has single, unique, positive, real eigenvalue which is strictly greater than all the others. In mathematical modelling of infectious disease, the dynamics of spreading is usually described through a set of non-linear ordinary differential equations (ODE). So there is always n {\displaystyle n} coupled equations of form C i ˙ = d C i d t = f ( C 1 , C 2 , . . . , C n ) {\displaystyle {\dot {C_{i}}}={\operatorname {d} \!C_{i} \over \operatorname {d} \!t}=f(C_{1},C_{2},...,C_{n})} which shows how the number of people in compartment C i {\displaystyle C_{i}} changes over time. For example, in a SIR model, C 1 = S {\displaystyle C_{1}=S} , C 2 = I {\displaystyle C_{2}=I} , and C 3 = R {\displaystyle C_{3}=R} . Compartmental models have a disease-free equilibrium (DFE) meaning that it is possible to find an equilibrium while setting the number of infected people to zero, I = 0 {\displaystyle I=0} . In other words, as a rule, there is an infection-free steady state. This solution, also usually ensures that the disease-free equilibrium is also an equilibrium of the system. There is another fixed point known as an Endemic Equilibrium (EE) where the disease is not totally eradicated and remains in the population. Mathematically, R 0 {\displaystyle R_{0}} is a threshold for stability of a disease-free equilibrium such that: To calculate R 0 {\displaystyle R_{0}} , the first step is to linearise around the disease-free equilibrium (DFE), but for the infected subsystem of non-linear ODEs which describe the production of new infections and changes in state among infected individuals. Epidemiologically, the linearisation reflects that R 0 {\displaystyle R_{0}} characterizes the potential for initial spread of an infectious person in a naive population, assuming the change in the susceptible population is negligible during the initial spread. A linear system of ODEs can always be described by a matrix. So, the next step is to construct a linear positive operator that provides the next generation of infected people when applied to the present generation. Note that this operator (matrix) is responsible for the number of infected people, not all the compartments. Iteration of this operator describes the initial progression of infection within the heterogeneous population. So comparing the spectral radius of this operator to unity determines whether the generations of infected people grow or not. R 0 {\displaystyle R_{0}} can be written as a product of the infection rate near the disease-free equilibrium and average duration of infectiousness. It is used to find the peak and final size of an epidemic. As described in the example above, so many epidemic processes can be described with a SIR‌ model. 
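The spectral-radius calculation in the two-gender example above is easy to reproduce numerically; a minimal sketch, where the values of f and m are illustrative:

```python
import numpy as np

f, m = 3.0, 1.2                      # illustrative expected secondary infections per gender
G = np.array([[0.0, f],
              [m,   0.0]])           # next-generation matrix of the two-gender example

R0_spectral = max(abs(np.linalg.eigvals(G)))   # spectral radius of G
print(f"spectral radius rho(G) = {R0_spectral:.4f}")
print(f"sqrt(m*f)              = {np.sqrt(m * f):.4f}")   # the two should coincide
```

The same eigenvalue computation applies unchanged to larger next-generation matrices with more than two types of infective.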
However, for many important infections, such as COVID-19, there is a significant latency period during which individuals have been infected but are not yet infectious themselves. During this period the individual is in compartment E (for exposed). Here, the formation of the next-generation matrix from the SEIR‌ model involves determining two compartments, infected and non-infected, since they are the populations that spread the infection. So we only need to model the exposed, E, and infected, I, compartments. Consider a population characterized by a death rate μ {\displaystyle \mu } and birth rate λ {\displaystyle \lambda } where a communicable disease is spreading. As in the previous example, we can use the transition rates between the compartments per capita such that β {\displaystyle \beta } be the infection rate, γ {\displaystyle \gamma } be the recovery rate, and κ {\displaystyle \kappa } be the rate at which a latent individual becomes infectious. Then, we can define the model dynamics using the following equations: { S ˙ = λ − μ S − β S I , E ˙ = β S I − ( μ + κ ) E , I ˙ = κ E − ( μ + γ ) I , R ˙ = γ I − μ R . {\displaystyle {\begin{cases}{\dot {S}}=\lambda -\mu S-\beta SI,\\\\{\dot {E}}=\beta SI-(\mu +\kappa )E,\\\\{\dot {I}}=\kappa E-(\mu +\gamma )I,\\\\{\dot {R}}=\gamma I-\mu R.\end{cases}}} Here we have 4 compartments and we can define vector x = ( S , E , I , R ) {\displaystyle \mathrm {x} =(S,E,I,R)} where x i {\displaystyle \mathrm {x} _{i}} denotes the number or proportion of individuals in the i {\displaystyle i} -th compartment. Let F i ( x ) {\displaystyle F_{i}(\mathrm {x} )} be the rate of appearance of new infections in compartment i {\displaystyle i} such that it includes only infections that are newly arising, but does not include terms which describe the transfer of infectious individuals from one infected compartment to another. Then if V i + {\displaystyle V_{i}^{+}} is the rate of transfer of individuals into compartment i {\displaystyle i} by all other means and V i − {\displaystyle V_{i}^{-}} is the rate of transfer of individuals out of the i {\displaystyle i} -th compartment, then the difference F i ( x ) − V i ( x ) {\displaystyle F_{i}(\mathrm {x} )-V_{i}(\mathrm {x} )} gives the rate of change of such that V i ( x ) = V i − ( x ) − V i + ( x ) {\displaystyle V_{i}(\mathrm {x} )=V_{i}^{-}(\mathrm {x} )-V_{i}^{+}(\mathrm {x} )} . We can now make matrices of partial derivatives of F {\displaystyle F} and V {\displaystyle V} such that F i j = ∂ F i ( x ∗ ) ∂ x j {\displaystyle F_{ij}={\partial \!\ F_{i}(\mathrm {x} ^{*}) \over \partial \!\ \mathrm {x} _{j}}} and V i j = ∂ V i ( x ∗ ) ∂ x j {\displaystyle V_{ij}={\partial \!\ V_{i}(\mathrm {x} ^{*}) \over \partial \!\ \mathrm {x} _{j}}} , where x ∗ = ( S ∗ , E ∗ , I ∗ , R ∗ ) = ( λ / μ , 0 , 0 , 0 ) {\displaystyle \mathrm {x} ^{*}=(S^{*},E^{*},I^{*},R^{*})=(\lambda /\mu ,0,0,0)} is the disease-free equilibrium. We now can form the next-generation matrix (operator) G = F V − 1 {\displaystyle G=FV^{-1}} . Basically, F {\displaystyle F} is a non-negative matrix which represents the infection rates near the equilibrium, and V {\displaystyle V} is an M-matrix for linear transition terms making V − 1 {\displaystyle V^{-1}} a matrix which represents the average duration of infectiousness. 
Therefore, G i j {\displaystyle G_{ij}} gives the rate at which infected individuals in x j {\displaystyle \mathrm {x} _{j}} produce new infections in x i {\displaystyle \mathrm {x} _{i}} , times the average length of time an individual spends in a single visit to compartment j . {\displaystyle j.} Finally, for this SEIR process we can have: F = ( 0 β S ∗ 0 0 ) {\displaystyle F={\begin{pmatrix}0&\beta S^{*}\\0&0\end{pmatrix}}} and V = ( μ + κ 0 − κ γ + μ ) {\displaystyle V={\begin{pmatrix}\mu +\kappa &0\\-\kappa &\gamma +\mu \end{pmatrix}}} and so R 0 = ρ ( F V − 1 ) = κ β S ∗ ( μ + κ ) ( μ + γ ) . {\displaystyle R_{0}=\rho (FV^{-1})={\frac {\kappa \beta S^{*}}{(\mu +\kappa )(\mu +\gamma )}}.} Estimation methods The basic reproduction number can be estimated through examining detailed transmission chains or through genomic sequencing. However, it is most frequently calculated using epidemiological models. During an epidemic, typically the number of diagnosed infections N ( t ) {\displaystyle N(t)} over time t {\displaystyle t} is known. In the early stages of an epidemic, growth is exponential, with a logarithmic growth rate K := d ln ⁡ ( N ) d t . {\displaystyle K:={\frac {d\ln(N)}{dt}}.} For exponential growth, N {\displaystyle N} can be interpreted as the cumulative number of diagnoses (including individuals who have recovered) or the present number of infection cases; the logarithmic growth rate is the same for either definition. In order to estimate R 0 {\displaystyle R_{0}} , assumptions are necessary about the time delay between infection and diagnosis and the time between infection and starting to be infectious. In exponential growth, K {\displaystyle K} is related to the doubling time T d {\displaystyle T_{d}} as K = ln ⁡ ( 2 ) T d . {\displaystyle K={\frac {\ln(2)}{T_{d}}}.} If an individual, after getting infected, infects exactly R 0 {\displaystyle R_{0}} new individuals only after exactly a time τ {\displaystyle \tau } (the serial interval) has passed, then the number of infectious individuals over time grows as n E ( t ) = n E ( 0 ) R 0 t / τ = n E ( 0 ) e K t {\displaystyle n_{E}(t)=n_{E}(0)\,R_{0}^{t/\tau }=n_{E}(0)\,e^{Kt}} or ln ⁡ ( n E ( t ) ) = ln ⁡ ( n E ( 0 ) ) + ln ⁡ ( R 0 ) t / τ . {\displaystyle \ln(n_{E}(t))=\ln(n_{E}(0))+\ln(R_{0})t/\tau .} The underlying matching differential equation is d n E ( t ) d t = n E ( t ) ln ⁡ ( R 0 ) τ . {\displaystyle {\frac {dn_{E}(t)}{dt}}=n_{E}(t){\frac {\ln(R_{0})}{\tau }}.} or d ln ⁡ ( n E ( t ) ) d t = ln ⁡ ( R 0 ) τ . {\displaystyle {\frac {d\ln(n_{E}(t))}{dt}}={\frac {\ln(R_{0})}{\tau }}.} In this case, R 0 = e K τ {\displaystyle R_{0}=e^{K\tau }} or K = ln ⁡ R 0 τ {\displaystyle K={\frac {\ln R_{0}}{\tau }}} . For example, with τ = 5 d {\displaystyle \tau =5~\mathrm {d} } and K = 0.183 d − 1 {\displaystyle K=0.183~\mathrm {d} ^{-1}} , we would find R 0 = 2.5 {\displaystyle R_{0}=2.5} . If R 0 {\displaystyle R_{0}} is time dependent ln ⁡ ( n E ( t ) ) = ln ⁡ ( n E ( 0 ) ) + 1 τ ∫ 0 t ln ⁡ ( R 0 ( t ) ) d t {\displaystyle \ln(n_{E}(t))=\ln(n_{E}(0))+{\frac {1}{\tau }}\int \limits _{0}^{t}\ln(R_{0}(t))dt} showing that it may be important to keep ln ⁡ ( R 0 ) {\displaystyle \ln(R_{0})} below 0, time-averaged, to avoid exponential growth. In this model, an individual infection has the following stages: This is a SEIR model and R 0 {\displaystyle R_{0}} may be written in the following form R 0 = 1 + K ( τ E + τ I ) + K 2 τ E τ I . 
This estimation method has been applied to COVID-19 and SARS. It follows from the differential equation for the number of exposed individuals $n_E$ and the number of infectious individuals $n_I$,

$$\frac{d}{dt}\begin{pmatrix}n_E\\n_I\end{pmatrix}=\begin{pmatrix}-1/\tau_E&R_0/\tau_I\\1/\tau_E&-1/\tau_I\end{pmatrix}\begin{pmatrix}n_E\\n_I\end{pmatrix}.$$

The largest eigenvalue of the matrix is the logarithmic growth rate $K$, which can be solved for $R_0$. In the special case $\tau_I=0$, this model results in $R_0=1+K\tau_E$, which is different from the simple model above ($R_0=\exp(K\tau_E)$). For example, with the same values $\tau=5~\mathrm{d}$ and $K=0.183~\mathrm{d}^{-1}$, we would find $R_0=1.9$, rather than the true value of $2.5$. The difference is due to a subtle difference in the underlying growth model; the matrix equation above assumes that newly infected patients are already contributing to infections, while in fact infections only occur due to the number infected a time $\tau_E$ ago. A more correct treatment would require the use of delay differential equations. The latent period is the transition time between the contagion event and the manifestation of the disease. In cases of diseases with varying latent periods, the basic reproduction number can be calculated as the sum of the reproduction numbers for each transition time into the disease. An example of this is tuberculosis (TB). Blower and coauthors calculated from a simple model of TB the following reproduction number:

$$R_0=R_0^{\text{FAST}}+R_0^{\text{SLOW}}.$$

In their model, it is assumed that infected individuals can develop active TB either by direct progression (the disease develops immediately after infection), considered above as FAST tuberculosis, or by endogenous reactivation (the disease develops years after the infection), considered above as SLOW tuberculosis. Other considerations within compartmental epidemic models In the case of some diseases such as AIDS and hepatitis B, it is possible for the offspring of infected parents to be born infected. This transmission of the disease down from the mother is referred to as vertical transmission. The influx of additional members into the infected category can be considered within the model by including a fraction of the newborn members in the infected compartment. Diseases transmitted from human to human indirectly, e.g. malaria spread by way of mosquitoes, are transmitted through a vector. In these cases, the infection transfers from human to insect, and an epidemic model must include both species, generally requiring many more compartments than a model for direct transmission. Other occurrences which may need to be considered when modeling an epidemic include things such as the following: Deterministic versus stochastic epidemic models The deterministic models presented here are valid only in the case of sufficiently large populations, and as such should be used cautiously.
These models are only valid in the thermodynamic limit, where the population is effectively infinite. In stochastic models, the long-time endemic equilibrium derived above does not hold, as there is a finite probability that the number of infected individuals drops below one in a system. In a true system, the pathogen may then fail to propagate, as no host would be infected. But in deterministic mean-field models, the number of infected individuals can take on real, that is, non-integer, values, and the number of infected hosts in the model can be less than one yet more than zero, thereby allowing the pathogen in the model to continue propagating. The reliability of compartmental models is limited to compartmental applications. One of the possible extensions of mean-field models considers the spreading of epidemics on a network based on percolation theory concepts. Stochastic epidemic models have been studied on different networks and more recently applied to the COVID-19 pandemic.
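The difference between the two model classes can be seen in a small simulation. The following Gillespie-style stochastic SIR sketch (population size, rates, and the single seed case are illustrative assumptions) shows that even with $R_0>1$ a sizeable fraction of outbreaks started by one case dies out after only a handful of infections, an outcome the deterministic mean-field equations, with their fractional numbers of infected hosts, cannot produce.

```python
# Minimal Gillespie-style stochastic SIR simulation illustrating chance extinction.
# Population size and rates are illustrative assumptions, not fitted values.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sir(N=1000, I0=1, beta=0.3, gamma=0.2, t_max=1000.0):
    S, I, t = N - I0, I0, 0.0
    while I > 0 and t < t_max:
        infection_rate = beta * S * I / N       # rate of S -> I events
        recovery_rate = gamma * I               # rate of I -> R events
        total = infection_rate + recovery_rate
        t += rng.exponential(1.0 / total)       # waiting time to the next event
        if rng.random() < infection_rate / total:
            S, I = S - 1, I + 1                 # infection
        else:
            I -= 1                              # recovery
    return N - S                                # final outbreak size

sizes = [stochastic_sir() for _ in range(500)]
# With R0 = beta/gamma = 1.5, roughly two-thirds of runs die out almost immediately,
# even though the deterministic SIR model predicts a sizeable epidemic every time.
print(sum(size < 10 for size in sizes) / len(sizes))
```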
========================================
[SOURCE: https://en.wikipedia.org/wiki/Israeli_Jews] | [TOKENS: 12926]
Contents Israeli Jews Israeli Jews or Jewish Israelis (Hebrew: יהודים ישראלים Yêhūdīm Yīśrāʾēlīm) comprise Israel's largest ethnic and religious community. The core of their demographic consists of those with a Jewish identity and their descendants, including ethnic Jews and religious Jews alike. About 46% of the global Jewish population resides in Israel; yerida is uncommon and is offset exponentially by aliyah, but those who do emigrate from the country typically move to the Western world. As such, the Israeli diaspora is closely tied to the broader Jewish diaspora. Israel is widely described as a melting pot for the various Jewish ethnic divisions, primarily consisting of Ashkenazi Jews, Sephardic Jews, and Mizrahi Jews, as well as many smaller Jewish communities, such as the Beta Israel, the Cochin Jews, the Bene Israel, and the Karaite Jews, among others. Over 25% of Jewish children and 35% of Jewish newborns in Israel are of mixed Ashkenazi and Sephardic or Mizrahi descent, and these figures have been increasing by approximately 0.5% annually: over 50% of Israel's entire Jewish population identifies as having Ashkenazi, Sephardic, and Mizrahi admixture. The integration of Judaism in Israeli Jewish life is split along four categories: the secularists (33%), the traditionalists (24%), the Orthodox (9%), and the Ultra-Orthodox (7%). In addition to religious influences, both Jewish history and Jewish culture serve as important aspects defining Israel's Jewish society, thereby contributing significantly to Israel's identity as the world's only Jewish-majority country. In 2018, Israel's Knesset narrowly voted in favour of Basic Law: Israel as the Nation-State of the Jewish People. As the Israeli government considers a person's Jewish status to be a matter of nationality and citizenship, the definition of Jewishness in the Israeli Law of Return includes patrilineal Jewish descent; this does not align with the stipulations of Judaism's halakha, which defines Jewishness through matrilineality. As of 1970[update], all Jews by blood and their spouses automatically qualify for the right to immigrate to the country and acquire Israeli citizenship. According to the Israel Central Bureau of Statistics, the Israeli Jewish population stood at 7,208,000 in 2023, comprising about 73% of the country's total population. Including non-Jewish relatives (e.g., spouses) raises this figure to 7,762,000, about 79% of the country's population. A 2008 Israel Democracy Institute study found that a plurality of Israeli Jews (47%) identify as Jews first and as Israelis second, and that 39% consider themselves Israelis first and foremost. Upon the Israeli Declaration of Independence in 1948, the Palestinian Jews of the Yishuv in the British Mandate for Palestine became known as Israeli Jews due to their adoption of a new national identity. The former term has since fallen out of common use. History Jews have long considered the Land of Israel to be their homeland, even while living in the diaspora. According to the Hebrew Bible the connection to the Land of Israel began in the covenant of the pieces when the region, which is called the land of Canaan, was promised to Abraham by God. Abraham settled in the region, where his son Isaac and grandson Jacob grew up with their families. Later on, Jacob and his sons went to Egypt. Decades later their descendants were led out of Egypt by Moses and Aaron, given the Tablets of Stone, returned to the land of Canaan and conquered it under the leadership of Joshua. 
After the period of the judges, in which the Israelites did not have an organized leadership, the Kingdom of Israel was established, which constructed the First Temple. This kingdom was soon split into two—the Kingdom of Judah and the Kingdom of Israel. After the destruction of these kingdoms and the destruction of the First Temple, the Israelites were exiled to Babylon. After about 70 years, some of the Israelites were permitted to return to the region, and soon thereafter they built the Second Temple. Later they established the Hasmonean Kingdom. The region was conquered by the Roman Empire in 63 BCE. During the first centuries of the Common Era, in the course of a series of rebellions against the Roman Empire, the Second Temple was destroyed and there was a general expulsion of Jews from their homeland. The area was later conquered by migrant Arabs who invaded the Byzantine Empire and established a Muslim caliphate in the 7th century during the rise of Islam. Throughout the centuries the size of the Jewish population in the land fluctuated. By the early 19th century, before the birth of modern Zionism in the 1880s, more than 10,000 Jews were still living in the area that is today modern Israel. Following centuries of Jewish diaspora, the 19th century saw the rise of Zionism, a Jewish nationalist movement that sought the self-determination of the Jewish people through the creation of a homeland for the Jews in Palestine. Significant numbers of Jews immigrated to Palestine from the 1880s onwards. Zionism remained a minority movement until the rise of Nazism in 1933 and the subsequent attempted extermination of the Jewish people in Nazi-occupied areas of Europe in the Holocaust. In the late 19th century large numbers of Jews began moving to the Ottoman- and later British-controlled region. In 1917, the British endorsed a national home for the Jews in Mandate Palestine by issuing the Balfour Declaration. The Jewish population in the region increased from 11% of the population in 1922 to 30% by 1940. In 1937, following the Great Arab Revolt, the partition plan proposed by the Peel Commission was rejected by both the Palestinian Arab leadership and the Zionist Congress. As a result, believing that their position in the Middle East in the event of a war depended on the support of the Arab states, Britain abandoned the idea of a Jewish state in 1939 in favour of a unitary state with a Jewish minority. The White Paper of 1939 capped Jewish immigration for five years, with further immigration dependent on the agreement of the Arabs. In the event, limited Jewish immigration was permitted until the end of the mandate. In 1947, following increasing levels of violence, the British government decided to withdraw from Mandatory Palestine. The 1947 UN Partition Plan split the mandate (apart from Jerusalem) into two states, Jewish and Arab, giving about 56% of Mandatory Palestine to the Jewish state. Immediately following the adoption of the Partition Plan by the United Nations General Assembly, the Palestinian Arab leadership rejected the plan to create the as-yet-unnamed Jewish state and launched a guerrilla war. On 14 May 1948, one day before the end of the British Mandate of Palestine, the leaders of the Jewish community in Palestine, led by David Ben-Gurion, declared the independence of the State of Israel, though without any reference to defined borders.
The armies of Egypt, Lebanon, Syria, Jordan, and Iraq invaded the former mandate, thus starting the 1948 Arab–Israeli War. The nascent Israel Defense Forces repelled the Arab armies from much of the former mandate, thus extending Israel's borders beyond the original UNSCOP partition. By December 1948, Israel controlled much of Mandate Palestine west of the Jordan River. The remainder of the Mandate consisted of Jordan, the area that came to be called the West Bank (controlled by Jordan), and the Gaza Strip (controlled by Egypt). Prior to and during this conflict, 711,000 Palestinian Arabs fled their original lands to become Palestinian refugees. The reasons for this are disputed, and range from claims that the major cause of Palestinian flight was military actions by the Israel Defense Forces and fear of events such as the Deir Yassin massacre, to encouragement to leave by Arab leaders so that they could return when the war was won. Immigration of Holocaust survivors and Jewish refugees from Arab lands doubled Israel's population within one year of its independence. Over the following years approximately 850,000 Sephardi and Mizrahi Jews fled or were expelled from surrounding Arab countries, mostly due to persecution, and in smaller numbers from Turkey, India, Afghanistan, and Iran. Of these, about 680,000 settled in Israel. Israel's Jewish population continued to grow at a very high rate for years, fed by waves of Jewish immigration from around the world, most notably the massive immigration wave of Soviet Jews, who arrived in Israel in the early 1990s following the dissolution of the USSR and who, according to the Law of Return, were entitled to become Israeli citizens upon arrival. About 380,000 arrived in 1990–1991 alone. Some 80,000–100,000 Ethiopian Jews have immigrated to Israel since the early 1980s. Since 1948, Israel has been involved in a series of major military conflicts, including the 1956 Suez War, 1967 Six-Day War, 1973 Yom Kippur War, 1982 Lebanon War, and 2006 Lebanon War, as well as a nearly constant series of ongoing minor conflicts. Israel has also been embroiled in an ongoing conflict with the Palestinians in the Israeli-occupied territories, which have been under Israeli control since the Six-Day War, despite the signing of the Oslo Accords on 13 September 1993, and the ongoing efforts of Israeli, Palestinian and global peacemakers. Population According to Israel's Central Bureau of Statistics, as of January 1, 2020, of Israel's 9.136 million people, 74.1% were Jews of any background. Among them, 68% were Sabras (Israeli-born), mostly second- or third-generation Israelis, and the rest were olim (Jewish immigrants to Israel)—22% from Europe and the Americas, and 10% from Asia and Africa, including the Arab countries. Nearly half of all Israeli Jews are descended from Jews who made aliyah from Europe, while around the same number are descended from Jews who made aliyah from Arab countries, Iran, Turkey, and Central Asia. Over two hundred thousand are, or are descended from, Ethiopian and Indian Jews. Israel is the only country in the world with a consistently growing Jewish population due to natural population increase. Jewish communities in the Diaspora have populations that are declining or steady, with the exception of the Orthodox and Haredi Jewish communities around the world, whose members often shun birth control for religious reasons and who have experienced rapid population growth.
The growth of the Orthodox and Haredi sector has partly balanced out negative population growth amongst other Jewish denominations. Haredi women have 7.7 children on average, while the average Israeli Jewish woman has over 3 children. When Israel was first established in 1948, it had the third-largest Jewish population in the world, after the United States and the Soviet Union. In the 1970s, Israel surpassed the Soviet Union to become the country with the second-largest Jewish population. In 2003, the Israeli Central Bureau of Statistics reported that Israel had surpassed the United States as the nation with the world's largest Jewish population. The report was contested by Professor Sergio Della Pergola of the Hebrew University of Jerusalem. Considered the greatest demographic expert on Jews, Della Pergola said it would take another three years to close the gap. In January 2006, Della Pergola stated that Israel now had more Jews than the United States, and that Tel Aviv had replaced New York as the metropolitan area with the largest Jewish population in the world, while a major demographic study found that Israel's Jewish population surpassed that of the United States in 2008. Due to the decline of Diaspora Jewry as a result of intermarriage and assimilation, along with the steady growth of the Israeli Jewish population, it has been speculated that within about 20 years,[when?] a majority of the world's Jews will live in Israel. In March 2012, the Israel Central Bureau of Statistics, as reported by Ynet, forecast that in 2019 Israel would be home to 6,940,000 Jews, of whom 5.84 million would be non-Haredi Jews living in Israel, compared with 5.27 million in 2009. The number is expected to grow to anywhere between 6.09 million and 9.95 million by 2059, marking a 16%–89% increase over the 2011 population. The Bureau also forecasts that the ultra-Orthodox population will number 1.1 million people by 2019, compared with 750,000 in 2009. By 2059, the projected Haredi Jewish population is estimated to be between 2.73 million and 5.84 million, marking a 264%–686% increase. Thus the total projected Israeli Jewish population by 2059 is estimated to be between 8.82 million and 15.79 million. In January 2014, demographer Joseph Chamie reported that the population of Israeli Jews is projected to reach 9.84 million by the year 2025 and 11.40 million by 2035. For statistical purposes, there are three main metropolitan areas in Israel. The majority of the Jewish population in Israel is located in the central area of the country, within the metropolitan area of Tel Aviv, which is currently the largest Jewish population center in the world. It has been argued that Jerusalem (Israel's proclaimed capital and largest city, with a population of 732,100 and an urban area of over 1,000,000 people, including 280,000 Palestinian East Jerusalemites who are not Israeli citizens and over 700,000 Israeli Jews) and Nazareth (with a population of 65,500 and an urban area of nearly 200,000 people, of which over 110,000 are Israeli Jews) should also be classified as metropolitan areas. By the time the State of Israel was proclaimed, the majority of Jews in the state and the region were Ashkenazi. Following the declaration of the state, a flood of Jewish migrants and refugees entered Israel—both from Europe and America and also from Arab and Muslim countries.
Most of the Jewish immigrants in the 1950s and 1960s were Jewish Holocaust survivors, as well as Sephardic Jews and Mizrahi Jews (mostly Moroccan Jews, Algerian Jews, Tunisian Jews, Yemenite Jews, Bukharan Jews, Iranian Jews, Iraqi Jews, Kurdish Jews, and smaller communities, principally from Lebanon, Syria, Libya, Egypt, India, Turkey and Afghanistan). In recent decades other Jewish communities have also immigrated to Israel, including Ethiopian Jews, Russian Jews and Bnei Menashe. Among Israeli Jews, 75% are Sabras (Israeli-born), mostly second- or third-generation Israelis, and the rest are olim (Jewish immigrants to Israel)—19% from Europe, the Americas and Oceania, and 9% from Asia and Africa, mostly the Muslim world. The Israeli government does not trace the diaspora origin of Israeli Jews. The paternal country of diaspora origin of Israeli Jews (including non-Halachically Jewish immigrants who arrived under the Law of Return), as traced by the CBS as of 2010, is as follows. In Israel there are approximately 300,000 citizens with Jewish ancestry who are not Jewish according to Orthodox interpretations of Jewish law. Of this number approximately 10% are Christian and 89% are either Jewish or non-religious. The total number of conversions under the Nativ program of the IDF was 640 in 2005 and 450 in 2006. From 2002 to 1 October 2007, a total of 2,213 soldiers had converted under Nativ. In 2003, 437 Christians converted to Judaism; in 2004, 884; and in 2005, 733. Recently several thousand conversions conducted by the Chief Rabbinate under the leadership of Rabbi Chaim Drukman have been annulled, and the official Jewish status of several thousand people who converted through the conversion court of the Chief Rabbinate since 1999 hangs in limbo as proceedings regarding these individuals' Jewish status continue. The vast majority of these individuals are immigrants from the former Soviet Union. In his 2001 book The Invention and Decline of Israeliness: State, Culture and Military in Israel, the Israeli sociologist Baruch Kimmerling identified and divided modern Israeli society into seven population groups (seven subcultures): the secular upper-middle class, the national religious group, the traditionalist Mizrahim, the Orthodox religious group, the Arab citizens of Israel, the Russian immigrants and the Ethiopian immigrants. According to Kimmerling, each of these population groups has distinctive characteristics, such as place of residence, consumption patterns, education systems, communications media and more. Today, Jews whose families emigrated from European countries and the Americas, on their paternal line, constitute the largest single group among Israeli Jews and consist of about 3,000,000 people living in Israel. About 1,200,000 of them are descended from or are immigrants from the former Soviet Union who returned from the diaspora after its fall in 1991 (about 300,000 of them are not considered to be Jewish under Jewish law). Most of the other 1,800,000 are descended from the first Zionist settlers in the Land of Israel, as well as Holocaust survivors and their descendants, with an additional 200,000 having immigrated from, or being descended from immigrants from, English-speaking countries and South America. They have played a prominent role in various fields including the arts, entertainment, literature, sports, science and technology, business and economy, media, and politics of Israel since its founding, and tend to be the most affluent of Israeli Jews.
Not all Jews immigrating to Israel from European countries are of Ashkenazi origin (the majority of French Jews are of Sephardic origin, and some Jews from the Asian republics of the USSR are Mizrahi), and the Israeli government does not distinguish between Jewish communities in its census. During the first decades of Israel as a state, strong cultural conflict existed between Mizrahi, Sephardic and Ashkenazi Jews (mainly east European Ashkenazim). The roots of this conflict, which still exists to a much smaller extent in present-day Israeli society, stem from the many cultural differences between the various Jewish communities, despite the government's encouragement of the "melting pot". That is to say, all Jewish immigrants in Israel were strongly encouraged to "melt down" their own particular exile identities within the general social "pot" in order to become Israeli. The current most prominent European countries of origin of the Israeli Jews are as follows:[citation needed] The majority of Israeli Jews are Mizrahi. The exact proportion of Mizrahi and Sephardic Jewish populations in Israel is unknown (since it is not included in the census); some estimates place Jews of Mizrahi origin at up to 61% of the Israeli Jewish population, with hundreds of thousands more having mixed Ashkenazi heritage due to cross-cultural intermarriage. In a survey that attempted to be representative, 44.9% of the Israeli Jewish sample identified as either Mizrahi or Sephardi, 44.2% as Ashkenazi or Russian Jews, about 3% as Beta Israel and 7.9% as mixed or other. Jews from North Africa and Asia have come to be called "Mizrahi Jews". Most African and Asian Jewish communities use the Sephardic prayer ritual and abide by the rulings of Sephardic rabbinic authorities, and therefore consider themselves to be "Sephardim" in the broader sense of "Jews of the Spanish rite", though not in the narrower sense of "Spanish Jews". Of late, the term Mizrahi has come to be associated with all Jews in Israel with backgrounds in Islamic lands. Cultural and/or racial biases against the newcomers were compounded by the fledgling state's lack of financial resources and inadequate housing to handle the massive population influx. Austerity was the law of the land during the country's first decade of existence. Thus, hundreds of thousands of new Sephardic immigrants were sent to live in tent cities in outlying areas. Sephardim (in its wider meaning) were often victims of discrimination, and were sometimes called schwartze (meaning "black" in Yiddish). The most egregious effects of racism were documented in the Yemenite children affair, in which Yemenite children were placed in the foster care of Ashkenazi families, their families being told that their children had died. Some believe that even worse than the housing discrimination was the differential treatment accorded the children of these immigrants, many of whom were tracked by the largely European education establishment into dead-end "vocational" high schools, without any real assessment of their intellectual capacities. Mizrahi Jews protested their unfair treatment, and even established the Israeli Black Panthers movement with the mission of working for social justice.
The effects of this early discrimination still linger a half-century later, as documented by the studies of the Adva Center, a think tank on social equality, and by other Israeli academic research (cf., for example, Tel Aviv University Professor Yehuda Shenhav's article in Hebrew documenting the gross under-representation of Sephardic Jewry in Israeli high school history textbooks). All Israeli Prime Ministers have been Ashkenazi, although Sephardim and Mizrahim have attained high positions, including ministerial positions, chief of staff and the presidency. The student bodies of Israel's universities remain overwhelmingly Ashkenazi in origin, despite the fact that roughly half the country's population is non-Ashkenazi. The tent cities of the 1950s morphed into so-called "development towns". Scattered over border areas of the Negev Desert and the Galilee, far from the bright lights of Israel's major cities, most of these towns never had the critical mass or ingredients to succeed as places to live, and they continue to suffer from high unemployment, inferior schools, and chronic brain drain.[citation needed] While the Israeli Black Panthers no longer exist, the Mizrahi Democratic Rainbow Coalition and many other NGOs carry on the struggle for equal access and opportunity in housing, education, and employment for the country's underprivileged populace—still largely composed of Sephardim and Mizrahim, joined now by newer immigrants from Ethiopia and the Caucasus Mountains. Today over 2,500,000 Mizrahi and Sephardic Jews live in Israel, the majority of them descendants of the 680,000 Jews who fled Arab countries due to expulsions and antisemitism, with smaller numbers having emigrated from the Islamic republics of the former Soviet Union (c. 250,000), India (70,000), Iran (200,000–250,000), and Turkey (80,000). Before the immigration of over 1,000,000 Russian, mainly Ashkenazi, Jews to Israel after the collapse of the Soviet Union, 70% of Israeli Jews were Sephardic or Mizrahi Jews. The current most prominent countries of diaspora origin of these Jewish communities are as follows: Israel also has small populations of Italian (rite) Jews from Italy and Romaniote Jews from Greece, Cyprus and Turkey. Both groups are considered distinct from the Sephardim and the Ashkenazim. Jews from both communities made aliyah in relatively large numbers during the 20th century, especially after the Holocaust. Both came in relatively small numbers as compared to other Jewish groups. Despite their small numbers, the Italian Jews have been prominent in the economy and academia. Most Italian and Romaniote Israelis and their descendants live in the Tel Aviv area. Argentines in Israel are the largest immigrant group from Latin America and one of the fastest growing groups. The vast majority of Argentines in Israel are Jewish Argentines who made aliyah, but there is also an important group of non-Jewish Argentines (those having, or being married to somebody who has, at least one Jewish grandparent) who choose Israel as their new home. There are about 50,000 Argentines residing in Israel, although some estimates put the figure at 70,000. Most Jewish Argentines are Ashkenazi Jews.[citation needed] Nearly all of the Ethiopian Beta Israel community today lives in Israel, comprising more than 121,000 people.
Most of this population are immigrants, and descendants of immigrants, who came to Israel during two massive waves of immigration mounted by the Israeli government—"Operation Moses" (1984) and "Operation Solomon" (1991). Civil war and famine in Ethiopia prompted the Israeli government to mount these dramatic rescue operations. The rescues were within the context of Israel's national mission to gather Diaspora Jews and bring them to the Jewish homeland. Some immigration has continued up until the present day. Today 81,000 Ethiopian Israelis were born in Ethiopia, while 38,500, or 32% of the community, are native-born Israelis. Over time, the Ethiopian Jews in Israel moved out of the government-owned mobile home camps that they initially lived in and settled mainly in the various cities and towns throughout Israel, mainly with the encouragement of the Israeli authorities, who granted the new immigrants generous government loans or low-interest mortgages. Similarly to other groups of immigrant Jews who made aliyah to Israel, the Ethiopian Jews have faced obstacles in their integration into Israeli society. Initially the main challenges of the Ethiopian Jewish community in Israel were due in part to communication difficulties (most of the population could not read or write in Hebrew, and much of the veteran population could not hold a simple conversation in the Hebrew language), and discrimination in certain areas of Israeli society. Unlike Russian immigrants, many of whom arrive with job skills, Ethiopians came from a subsistence economy and were ill-prepared to work in an industrialized society. Over the years there has been significant progress in the integration of this population group into Israeli society, primarily due to the fact that most of the young Ethiopian population is conscripted into military service (mandatory for all Israelis at 18), where most Ethiopian Jews have been able to increase their chances for better opportunities. The 2013 Miss Israel title was given to Yityish Titi Aynaw, the first Ethiopian-born contestant to win the pageant. Aynaw moved to Israel from Ethiopia with her family when she was 12. Intermarriage between Ashkenazi Jews and Sephardi/Mizrahi Jews in Israel was initially uncommon, due in part to the geographic distance between each group's settlements in Israel, economic gaps, and cultural and/or racial biases. In recent generations, however, the barriers were lowered by state-sponsored assimilation of all the Jewish communities into a common Sabra (native-born Israeli) identity, which facilitated extensive "mixed marriages". The percentage of Jewish children born to mixed marriages between Ashkenazi Jews and Sephardi/Mizrahi Jews rose steadily. A 1995 survey found that 5.3% of Jews aged 40–43, 16.5% of Jews aged 20–21, and 25% of Jews aged 10–11 were of mixed ancestry. That same year, 25% of Jewish children born in Israel were of mixed ancestry. Even though the assimilation rate among the Israeli Jewish community has always been low, the propriety and degree of assimilation of Israeli Jews and Jews worldwide has always been a significant and controversial issue within the modern Israeli Jewish community, with both political and religious skeptics. While not all Jews disapprove of intermarriage, many members of the Israeli Jewish community have expressed their concern that a high rate of interfaith marriages will result in the eventual disappearance of the Israeli Jewish community.
In contrast to the current moderate birth rates of Israeli Jews and the relative low trends of assimilation, some communities within Israeli Jewry, such as Orthodox Jews, have significantly higher birth rates and lower intermarriage rates, and are growing rapidly. Since the establishment of the State of Israel in 1948 the term "Yerida" has been used to mark the emigration of Jews from Israel, whether in groups (small or large) or individually. The name is used in a pejorative sense, as "yerida" means "going down", while "aliyah", immigration to Israel, means "going up". Through the years, the majority of Israeli Jews who emigrated from Israel went to the United States and Canada. For many years definitive data on Israeli emigration was unavailable. In The Israeli Diaspora sociologist Stephen J. Gold maintains that calculation of Jewish emigration has been a contentious issue, explaining, "Since Zionism, the philosophy that underlies the existence of the Jewish state, calls for return home of the world's Jews, the opposite movement—Israelis leaving the Jewish state to reside elsewhere—clearly presents an ideological and demographic problem." Among the most common reasons for emigration of Israeli Jews from Israel are economic constraints, economic characteristics (U.S. and Canada have always been richer nations than Israel), disappointment of the Israeli government, Israel's ongoing security issues, as well as the excessive role of religion in the lives of Israelis. In recent decades, considerable numbers of Israeli Jews have moved abroad. Reasons for emigration vary, but generally relate to a combination of economic and political concerns. According to data published in 2006, from 1990 to 2005, 230,000 Israelis left the country; a large proportion of these departures included people who initially immigrated to Israel and then reversed their course (48% of all post-1990 departures and even 60% of 2003 and 2004 departures were former immigrants to Israel). 8% of Jewish immigrants in the post-1990 period left Israel. In 2005 alone, 21,500 Israelis left the country and had not yet returned at the end of 2006; among them 73% were Jews. At the same time, 10,500 Israelis came back to Israel after over one year abroad; 84% of them were Jews. In addition, the Israeli Jewish diaspora group also has many Jews worldwide, especially the ones who originate from Western countries, who moved to Israel and gained Israeli citizenship under the Law of Return, who lived in Israel for a time, then returned to their country of origin and kept their dual citizenship. Many Israeli Jews emigrated to the United States throughout the period of the declaration of the state of Israel and until today. Today, the descendants of these people are known as Israeli-Americans. The 2000 Census counted 106,839 Israeli Americans. It is estimated that 400,000–800,000 Israeli Jews have immigrated to the United States since the 1950s, though this number remains a contested figure, since many Israelis are originally from other countries and may list their origin countries when arriving in the United States. Moscow has the largest single Israeli expatriate community in the world, with 80,000 Israeli citizens living in the city as of 2014, almost all of them native Russian-speakers. Many Israeli cultural events are hosted for the community, and many live part of the year in Israel. (To cater to the Israeli community, Israeli cultural centres are located in Moscow, Saint Petersburg, Novosibirsk and Yekaterinburg.) 
Many Israeli Jews emigrated to Canada throughout the period of the declaration of the state of Israel and until today. Today, the descendants of these people are known as Israeli Canadians. It is estimated that as many as 30,000 Jewish Israelis live in Canada. Many Israeli Jews emigrated to the United Kingdom throughout the period of the declaration of the state of Israel and until today. Today, the descendants of these people are known as Israeli-British. It is estimated that as many as 30,000 Jewish Israelis live in the United Kingdom. The majority of the Israeli Jews in the UK live in London and in particular in the heavily populated Jewish area of Golders Green. In the northern part of Israel the percentage of Jewish population is declining. The increasing population of Arabs within Israel, and the majority status they hold in two major geographic regions—the Galilee and the Triangle—has become a growing point of open political contention in recent years. The phrase demographic threat (or demographic bomb) is used within the Israeli political sphere to describe the growth of Israel's Arab citizenry as constituting a threat to its maintenance of its status as a Jewish state with a Jewish demographic majority. Israeli historian Benny Morris states: The Israeli Arabs are a time bomb. Their slide into complete Palestinization has made them an emissary of the enemy that is among us. They are a potential fifth column. In both demographic and security terms they are liable to undermine the state. So that if Israel again finds itself in a situation of existential threat, as in 1948, it may be forced to act as it did then. If we are attacked by Egypt (after an Islamist revolution in Cairo) and by Syria, and chemical and biological missiles slam into our cities, and at the same time Israeli Palestinians attack us from behind, I can see an expulsion situation. It could happen. If the threat to Israel is existential, expulsion will be justified[...] The term "demographic bomb" was famously used by Benjamin Netanyahu in 2003 when he asserted that if the percentage of Arab citizens rises above its current level of about 20 percent, Israel will not be able to maintain a Jewish demographic majority. Netanyahu's comments were criticized as racist by Arab Knesset members and a range of civil rights and human rights organizations, such as the Association for Civil Rights in Israel. Even earlier allusions to the "demographic threat" can be found in an internal Israeli government document drafted in 1976 known as the Koenig Memorandum, which laid out a plan for reducing the number and influence of Arab citizens of Israel in the Galilee region. In 2003, the Israeli daily Ma'ariv published an article entitled, "Special Report: Polygamy is a Security Threat," detailing a report put forth by the Director of the Israeli Population Administration at the time, Herzl Gedj; the report described polygamy in the Bedouin sector a "security threat" and advocated means of reducing the birth rate in the Arab sector. The Population Administration is a department of the Demographic Council, whose purpose, according to the Israeli Central Bureau of Statistics is: "to increase the Jewish birthrate by encouraging women to have more children using government grants, housing benefits, and other incentives." In 2008 the Minister of the Interior appointed Yaakov Ganot as new head of the Population Administration, which according to Haaretz is "probably the most important appointment an interior minister can make." 
The rapid population growth within the Haredi sector may, according to some Israeli researchers, affect the preservation of a Jewish majority in the state of Israel. Preserving a Jewish majority population within the state of Israel has been a defining principle among Israeli Jews, and Jewish couples are encouraged to have large families. Many financial incentives have been offered by the Israeli government. For instance, Israel's first Prime Minister David Ben-Gurion set up a monetary fund for Jewish women who gave birth to at least 10 children. To further increase the Israeli Jewish fertility rate and population, many fertility clinics have been opened and are operated throughout the country. As part of Israel's universal health-care coverage, Israel spends $60 million annually on publicly funded fertility treatments and operates more fertility clinics per capita than any other country in the world. A study showed that in 2010, Jewish birthrates rose by 31% and 19,000 diaspora Jews immigrated to Israel, while the Arab birthrate fell by 1.7%. By June 2013, a number of Israeli demographers called the so-called Arab demographic time bomb a myth, citing a declining Arab and Muslim birth rate, an incremental increase in the Israeli Jewish birth rate, unnecessary demographic scare campaigns, as well as inflated statistics released by the Palestinian Authority. Former Israeli ambassador Yoram Ettinger has rejected the assertion of a demographic time bomb, saying that anyone who believes such claims is either misled or mistaken. American political scientist Ian Lustick has accused Ettinger and his associates of multiple methodological errors and of having a political agenda. Jewish Israeli culture Jewishness is widely considered by Israeli Jews as a national, ethnic and religious identity (see Ethnoreligious group). Commonly, Israeli Jews are identified as haredim (ultra-Orthodox), datim (Orthodox), masortim (traditionalists), and hiloni (secular). In 2011, roughly 9% of Israeli Jews defined themselves as Haredim (ultra-Orthodox religious); an additional 10% are "religious"; 15% consider themselves "religious traditionalists", not strictly adhering to religion; a further 23% are self-defined "'not very religious' traditionalists" and 43% are "secular" ("hiloni"). A 2025 Pew Research Center survey on religious switching among Israeli Jews revealed that 22% transitioned between different Jewish groups. The study found that 45% of Israeli Jews identify as hilonim (secular), though only 38% were raised as such. Among masortim (traditionalists), the figures were 24% and 25%, respectively, while datim (Orthodox) accounted for 18% compared to 23% in their upbringing. Meanwhile, haredim (ultra-Orthodox) comprised 13%, slightly higher than the 12% raised in that tradition. This data highlights a notable shift: hilonim experienced the most significant net gain (+7%), whereas datim faced the steepest decline (-5%). The other groups saw minor changes, with masortim decreasing by 1% and haredim increasing by 1%. By 2020, ultra-Orthodox Israelis already numbered more than 1.1 million (14 percent of the total population). However, 78% of all Israelis (virtually all Israeli Jews) participate in a Passover seder, and 63% fast on Yom Kippur. Some who consider themselves ethnically Jewish follow other religions such as Christianity or Messianic Judaism. Jewish religious practice in Israel is quite varied.
Among the 4.3 million American Jews described as "strongly connected" to Judaism, over 80% report some sort of active engagement with Judaism, ranging from attendance at daily prayer services on one end of the spectrum to as little as attendance at Passover Seders or lighting Hanukkah candles on the other.[citation needed][relevant?] Unlike North American Jews, according to the 2013 Israel Democracy Institute's data, the majority of Israeli Jews tend not to align themselves with Jewish religious movements (such as Orthodox, Reform or Conservative Judaism, although these also exist in Israel, as in the West) but instead tend to define their religious affiliation by the degree of their religious practice. Another characteristic of the Jewish community in Israel is the relatively high dynamism with which Israeli Jews tend to define their religious status. Among the secular and traditionalist groups some individuals choose to embrace Orthodox Judaism. In 2009 around 200,000 Israeli Jews aged 20 and above defined themselves as "Baalei teshuva" (בעלי תשובה); nevertheless, in practice about a quarter of them have a traditionalist lifestyle. Various Orthodox organizations operate in Israel with the aim of getting non-Orthodox Jews to embrace Orthodox Judaism. Notable examples are the Chasidic movements Chabad and Breslov, which have gained much popularity among the Baalei teshuva, the organizations Arachim and Yad LeAchim, which organize seminars in Judaism, and the organization Aish HaTorah. On the other hand, among the religious and Orthodox groups in Israel, many individuals choose to part from the religious lifestyle and embrace a secular one (they are referred to as Yotz'im bish'ela). A study conducted in 2011 estimated that about 30 percent of national religious youth leave the religious lifestyle and embrace a secular one, but 75 percent of them return to religion after a process of forming their self-identity, which usually lasts until age 28.[citation needed] The percentage among those who grew up in Chassidic homes is even higher. In contrast to Baalei teshuva, Orthodox Jews who wish to embrace a secular lifestyle have very few organizations to assist them in parting from the Haredi world, and they often end up destitute or struggling to close the educational and social gaps. The most prominent organizations that assist Yotz'im bish'ela are the NGOs Hillel and Dror.[citation needed] Education is a core value in Jewish culture and in Israeli society at large, with many Israeli parents sacrificing their own personal comforts and financial resources to provide their children with the highest standards of education possible. Much of the Israeli Jewish population seeks education as a passport to a decent job and a middle-class paycheck in the country's competitive high-tech economy. Jewish parents, especially mothers, take great responsibility for inculcating the value of education in their children at a young age. Striving for high academic achievement and educational success is stressed in many modern Jewish Israeli households, as parents make sure that their children are adequately educated in order to gain the technological skills needed to compete in Israel's modern high-tech job market. Israelis see competency in in-demand job skills, such as literacy in math and science, as especially necessary for employment success in Israel's competitive 21st-century high-tech economy.
Israel's Jewish population maintains a relatively high level of educational attainment where just under half of all Israeli Jews (46%) hold post-secondary degrees. This figure has remained stable in their already high levels of educational attainment over recent generations. Israeli Jews (among those ages 25 and older) have an average of 11.6 years of schooling making them one of the most highly educated of all major religious groups in the world. The Israeli government regulates and finances most of the schools operating in the country, including the majority of those run by private organizations. The national school system has two major branches—a Hebrew-speaking branch and an Arabic-speaking branch. The core curricula for the two systems are almost identical in mathematics, sciences, and English. It is different in humanities (history, literature, etc.). While Hebrew is taught as a second language in Arab schools since the third grade and obligatory for Arabic-speaking schools' matriculation exams, only basic knowledge of Arabic is taught in Hebrew-speaking schools, usually from the 7th to the 9th grade. Arabic is not obligatory for Hebrew-speaking schools' matriculation exams.[citation needed] The movement for the revival of Hebrew as a spoken language was particularly popular among new Jewish Zionist immigrants who came to Palestine since the 1880s. Eliezer Ben-Yehuda (born in the Russian Empire) and his followers created the first Hebrew-speaking schools, newspapers, and other Hebrew-language institutions. After his immigration to Israel, and due to the impetus of the Second Aliyah (1905–1914), Hebrew prevailed as the single official and spoken language of the Jewish community of mandatory Palestine. When the State of Israel was formed in 1948, the government viewed Hebrew as the de facto official language and initiated a melting pot policy, where every immigrant was required to study Hebrew and often to adopt a Hebrew surname. Use of Yiddish, which was the main competitor prior to World War II, was discouraged, and the number of Yiddish speakers declined as the older generations died out, though Yiddish is still commonly used in Ashkenazi haredi communities. Modern Hebrew is also the primary official language of the modern State of Israel and almost all Israeli Jews are native Hebrew-speakers and speak Hebrew as their primary language. A variety of other languages are still spoken within some Israeli Jewish communities, communities that are representative of the various Jewish ethnic divisions from around the world that have come together to make up Israel's Jewish population. Even though the majority of Israeli Jews are native Hebrew speakers, many Jewish immigrants still continue to speak their former languages—many immigrants from the Soviet Union continue to speak primarily Russian at home and many immigrants from Ethiopia continue to speak primarily Amharic at home. Many of Israel's Hasidic Jews (being exclusively of Ashkenazi descent) are raised speaking Yiddish. Classical Hebrew is the language of most Jewish religious literature, such as the Tanakh (Bible) and Siddur (prayerbook). Currently, 90% of the Israeli-Jewish public is proficient in Hebrew, and 70% is highly proficient. 
Some prominent Israeli politicians such as David Ben-Gurion tried to learn Arabic, and the Mizrahi Jews spoke Judeo-Arabic although most of their descendants in Israel today only speak Hebrew.[citation needed] Legal and political status in Israel Israel was established as a homeland for the Jewish people and is often referred to as the Jewish state. Israel's Declaration of Independence specifically called for the establishment of a Jewish state with equality of social and political rights, irrespective of religion, race, or sex. The notion that Israel should be constituted in the name of and maintain a special relationship with a particular group of people, the Jewish people, has drawn much controversy vis-à-vis minority groups living in Israel—the large number of Muslim and Christian Palestinians residing in Israel. Nevertheless, through the years many Israeli Jewish nationalists have based the legitimacy of Israel being a Jewish state on the Balfour Declaration and ancient historical ties to the land, asserting that both play particular roles as evidence under international law, as well as a fear that a hostile Arab world might be disrespectful of a Jewish minority—alleging a variety of possible harms up to and including genocide—were Israel to become a post-national "state for all its citizens".[citation needed] Through the years, as Israel's continued existence as a "Jewish State" has relied upon the maintenance of a Jewish demographic majority, Israeli demographers, politicians and bureaucrats have treated Jewish population growth promotion as a central question in their research and policymaking. The Law of Return is an Israeli legislation that grants all Jews and those of Jewish lineage the right to gain an Israeli citizenship and to settle in Israel. It was enacted by the Knesset, Israel's Parliament, on 5 July 1950, and the related Law of Citizenship in 1952. These two pieces of legislation contain expressions pertaining to religion, history and nationalism, as well as to democracy, in a combination unique to Israel. Together, they grant preferential treatment to Jews returning to their ancestral homeland. The Law of Return declares that Israel constitutes a home not only for the inhabitants of the State, but also for all members of the Jewish people everywhere, be they living in poverty and fear of persecution or be they living in affluence and safety. The law declares to the Jewish people and to the world that the State of Israel welcomes the Jews of the world to return to their ancient homeland. Currently, all the marriages and divorces in Israel (as well as within the Jewish community) are recognized by the Israeli Interior Ministry only if performed under an official recognized religious authority and only between a man and a woman of the same religion. The Jewish marriage and divorce in Israel are under the jurisdiction of the Chief Rabbinate of Israel, which defines a person's Jewish status strictly according to halakha. Civil marriages are only officially sanctioned if performed abroad. As a result, it is not uncommon for couples who may for some reason not be able (or chose not) to get married in Israel to travel overseas to get married. During its time of existence the legal settlement that gives the rabbinical courts the monopoly on conducting the marriages and divorces of the entire Israeli Jewish population has been a source of great criticism from the secular public in Israel, but also to the ardent support from the religious public. 
The main argument of the law's supporters is that its cancellation would divide the Jewish people in Israel between Jews who marry and divorce under the Jewish religious authorities and Jews who marry and divorce through civil marriages, which would not be registered or overseen by the religious authorities; the children of the latter would then be considered ineligible to marry the children of couples married within the religious court, for fear of their being considered Mamzer. Opponents of the law see it as a severe offense against civil rights by the state of Israel. However, common-law marriage is recognized by Israeli law, without restriction of ethnicity, religion or sex (that is, both for opposite-sex and same-sex couples, and between a Jew and a non-Jew). Once the status of common-law marriage is proven and obtained, it confers a legal status almost equal to marriage. National military service is mandatory for any Israeli over the age of 18, with the exception of the Arab Muslim and Christian population (currently estimated at around 20% of the Israeli population) and many ultra-Orthodox Jews (currently estimated at around 8% of the Israeli Jewish population and rising steeply). Druze and Circassian men are liable, by agreement with their community leaders. Members of the exempted groups can still volunteer, but very few do, except for the Bedouin, among whom a relatively large number of men have tended to volunteer. The Israeli Jewish population, and especially the secular Israeli Jewish population, is currently the only population group in Israel subject to mandatory military conscription for both men and women—a fact that has caused much resentment within the Jewish community towards the non-serving population, some of whom demand that all Israeli citizens share an equal amount of responsibility, whether in the Israeli army or as part of Sherut Leumi. In addition, in recent decades a growing minority of Israeli Jewish conscripts have denounced the mandatory enrollment and refused to serve, claiming that due to financial insecurity they need to spend their time more productively pursuing their chosen studies or career paths. Some individual resentment may also be compounded by the typically low wages paid to conscripts—current Israeli policy sees national service as a duty rendered to the country and its citizens, and therefore the Israeli army does not pay wages to conscripts, but instead grants a low monthly allowance to full-time national service personnel, depending on the type of their duty. The Jewish National Fund is a private organization established in 1901 to buy and develop land in the Land of Israel for Jewish settlement; land purchases were funded by donations from world Jewry exclusively for that purpose. The JNF currently owns 13% of the land in Israel, while 79.5% is owned by the government (this land is leased on a non-discriminatory basis)[citation needed] and the rest, around 6.5%, is evenly divided between private Arab and Jewish owners. Thus, the Israel Land Administration (ILA) administers 93.5% of the land in Israel (Government Press Office, Israel, 22 May 1997). A significant portion of JNF lands were originally properties left behind by Palestinian "absentees", and as a result the legitimacy of some JNF land ownership has been a matter of dispute.
The JNF purchased these lands from the State of Israel between 1949 and 1953, after the state took control of them according to the Absentee Properties Law. While the JNF charter specifies the land is for the use of the Jewish People, land has been leased to Bedouin herders. Nevertheless, JNF land policy has been criticized as discrimination. When the Israel Land Administration leased JNF land to Arabs, it took control of the land in question and compensated the JNF with an equivalent amount of land in areas not designated for development (generally in the Galilee and the Negev), thus ensuring that the total amount of land owned by the JNF remains the same. This was a complicated and controversial mechanism, and in 2004 use of it was suspended. After Supreme Court discussions and a directive by the Attorney General instructing the ILA to lease JNF land to Arabs and Jews alike, in September 2007 the JNF suggested reinstating the land-exchange mechanism. While the JNF and the ILA view an exchange of lands as a long-term solution, opponents say that such maneuvers privatize municipal lands and preserve a situation in which significant lands in Israel are not available for use by all of its citizens. As of 2007, the High Court delayed ruling on JNF policy regarding leasing lands to non-Jews, and changes to the ILA-JNF relationship were up in the air. Adalah and other organizations furthermore express concern that proposed severance of the relation between the ILA and JNF, as suggested by Ami Ayalon, would leave the JNF free to retain the same proportion of lands for Jewish uses as it seeks to settle hundreds of thousands of Jews in areas with a tenuous Jewish demographic majority (in particular, 100,000 Jews in existing Galilee communities and 250,000 Jews in new Negev communities via the Blueprint Negev). The main language used for communication among Israeli citizens and among the Israeli Jews is Modern Hebrew, a language that emerged in the late 19th century, based on different dialects of ancient Hebrew and influenced by Yiddish, Arabic, Slavic languages, and German. Hebrew and Arabic are currently official languages of Israel. Government ministries publish all material intended for the public in Hebrew, with selected material translated into Arabic, English, Russian, and other languages spoken in Israel. The country's laws are published in Hebrew, and eventually English and Arabic translations are published. Publishing the law in Hebrew in the official gazette (Reshumot) is enough to make it valid. Unavailability of an Arabic translation can be regarded as a legal defense only if the defendant proves he could not understand the meaning of the law in any conceivable way. Following appeals to the Israeli Supreme Court, the use of Arabic on street signs and labels increased dramatically. In response to one of the appeals presented by Arab Israeli organizations,[which?] the Supreme Court ruled that although second to Hebrew, Arabic is an official language of the State of Israel, and should be used extensively. Today most highway signage is trilingual, written in Hebrew, Arabic, and English. Hebrew is the standard language of communication at places of work except inside the Arab community, and among recent immigrants, foreign workers, and with tourists. The state's schools in Arab communities teach in Arabic according to a specially adapted curriculum. This curriculum includes mandatory lessons of Hebrew as foreign language from the 3rd grade onwards. 
Arabic is taught in Hebrew-speaking schools, but only the basic level is mandatory. The Israeli national anthem and the Israeli flag have exclusively Jewish themes and symbols. Critics of Israel as a Jewish nation state have suggested that it should adopt more inclusive and neutral symbolism for the national flag and anthem, arguing that they exclude the non-Jewish citizens of Israel from the narrative of national identity. Defenders of the flag say that many flags in Europe bear crosses (such as the flags of Sweden, Finland, Norway, the United Kingdom, Switzerland, and Greece), while flags in predominantly Muslim countries bear distinctive Muslim symbols (such as those of Turkey, Tunisia, Algeria, Mauritania, and Saudi Arabia). Over the years, some Israeli-Arab politicians have requested a re-evaluation of the Israeli flag and national anthem, arguing that they cannot represent all citizens of Israel, including its Arab citizens. Although the proposals to change the flag have never been discussed in the state institutions, they do occasionally surface in public discussion, as part of the debate over whether Israel is, as defined by the Basic Law: Human Dignity and Liberty, "A Jewish and Democratic State", or whether it must become, as demanded by certain circles, "a state of all its citizens". The demand to change the flag is seen by many Israelis as a threat to the very essence of the state.[citation needed] In relation to this, in 2001 the Israeli Minister of Education Limor Livnat ordered the enforcement of the flag amendment she had initiated, requiring the flag to be raised in front of all schools in Israel, even those serving the Arab population.[citation needed] Intercommunal relations As part of the Israeli–Palestinian conflict, various Palestinian militants have over the years carried out attacks against Israeli Jews. Statistics published by B'Tselem in 2012 state that 3,500 Israelis have been killed and 25,000 wounded as a result of Palestinian violence since the establishment of the state of Israel in 1948. These figures include soldiers as well as civilians, including those killed in exchanges of gunfire. Israeli statistics listing 'hostile terrorist attacks' also include incidents in which stones are thrown. Suicide bombings constituted just 0.5% of Palestinian attacks against Israelis in the first two years of the Al Aqsa Intifada, though they accounted for half of the Israelis killed in that period. According to the Israel Ministry of Foreign Affairs, there were 56 terrorist attacks against Israelis from 1952 to 1967. During the 1970s, numerous attacks against Israeli civilians were carried out by Palestinians from Lebanon. Notable incidents include the Coastal Road Massacre (25 adults and 13 children killed, 71 injured), the Avivim school bus massacre (3 adults and 9 children killed, 25 injured), the Kiryat Shmona massacre (9 adults and 9 children killed, 15 injured), the Lod Airport massacre (26 killed, 79 injured), and the Ma'alot massacre (8 adults and 23 children killed, 70 injured). The Israel Ministry of Foreign Affairs lists 96 fatal terror attacks against Israelis from September 1993 to September 2000, of which 16 were bombing attacks, resulting in 269 deaths. During the Second Intifada, a period of increased violence from September 2000 to 2005, Palestinians carried out 152 suicide bombings and attempted to carry out over 650 more. 
Other methods of attack include launching Qassam rockets and mortars into Israel, kidnappings of both soldiers and civilians (including children), shootings, assassinations, stabbings, and lynchings. As of November 2012, over 15,000 rockets and mortars had been fired at Israel from the Gaza Strip. The Israel Ministry of Foreign Affairs reported that of the 1,010 Israelis killed between September 2000 and January 2005, 78 percent were civilians. Another 8,341 were injured in what the Israeli Ministry of Foreign Affairs referred to as terrorist attacks between 2000 and 2007. In 2010, Israel honored the memory of all 3,971 Israeli civilian victims who had been killed over the course of Israel's history as a result of political violence, Palestinian political violence, and terrorism. There are significant tensions between Arab citizens and their Jewish counterparts. Polls differ considerably in their findings regarding intercommunal relations. On 29 April 2007 Haaretz reported that an Israeli Democracy Institute (IDI) poll of 507 people showed that 75% of "Israeli Arabs would support a constitution that maintained Israel's status as a Jewish and democratic state while guaranteeing equal rights for minorities, while 23% said they would oppose such a definition." In contrast, a 2006 poll commissioned by The Center Against Racism showed negative attitudes towards Arabs, based on questions asked of 500 Jewish residents of Israel representing all levels of Jewish society. The poll found that: 63% of Jews believe Arabs are a security threat; 68% of Jews would refuse to live in the same building as an Arab; and 34% of Jews believe that Arab culture is inferior to Israeli culture. Additionally, support for segregation between Jewish and Arab citizens was found to be higher among Jews of Middle Eastern origin than among those of European origin. A more recent poll by the Center Against Racism (2008) found a worsening of Jewish citizens' perceptions of their Arab counterparts. A further poll was conducted in 2007 by Sammy Smooha, a sociologist at Haifa University, in the aftermath of the 2006 Lebanon War. Surveys in 2009 found a radicalization in the positions of Israeli Arabs towards the State of Israel, with 41% of Israeli Arabs recognizing Israel's right to exist as a Jewish and democratic state (down from 65.6% in 2003), and 53.7% believing Israel has a right to exist as an independent country (down from 81.1% in 2003). Polls also showed that 40% of Arab citizens engaged in Holocaust denial. A 2010 Arab Jewish Relations Survey was compiled by Prof. Sammy Smooha in collaboration with the Jewish-Arab Center at the University of Haifa, and a further 2010 poll was conducted by the Arab World for Research and Development. A range of politicians, rabbis, journalists, and historians commonly refer to the 20–25% minority of Arabs in Israel as a "fifth column" inside the state of Israel. Genetics Israeli Jews encompass a diverse range of Jewish communities from around the world, such as Ashkenazi, Sephardi, Mizrahi, Beta Israel, Cochin, Bene Israel, and Karaite Jews, among others, representing roughly half of all Jewish people living today. This rich tapestry of Jewish diaspora communities contributes to the genetic composition of Israeli Jews, reflecting the diverse ancestral origins of those who immigrated to Israel. Over time, these communities are growing closer together and intermixing, resulting in a dynamic and evolving genetic makeup among Israeli Jews. 
Genetic studies have revealed that Jewish populations worldwide share a significant amount of Middle Eastern genetic ancestry, suggesting a common origin in the ancient Near East. This shared genetic heritage likely includes contributions from the Israelites and other ancient populations in the region. Jews also exhibit genetic signatures that indicate some degree of genetic admixture with local populations in the regions where they settled, due to intermarriage, migrations, and other interactions with those populations throughout history. Jews of diverse ancestries exhibit genetic connections to neighboring non-Jewish populations in the Levant, such as the Lebanese, Samaritans, Palestinians, Bedouins, and Druze. Additionally, there are genetic connections to Southern European populations, including Cypriots, southern Italians, and Greeks, which can be attributed to historical interactions and migrations.
========================================
[SOURCE: https://en.wikipedia.org/wiki/LitRPG] | [TOKENS: 875]
Contents LitRPG LitRPG, short for literary role-playing game, is a literary genre combining the conventions of computer RPGs with science-fiction and fantasy novels. The term was introduced in 2013.[citation needed] In LitRPG, game-like elements form an essential part of the story, and visible RPG statistics (for example strength, intelligence, damage) are a significant part of the reading experience. This distinguishes the genre from novels that tie in with a game, like those set in the world of Dungeons & Dragons; books that are actual games, such as the choose-your-own-adventure Fighting Fantasy type of publication; or games that are literarily described, like MUDs and interactive fiction. Typically, the main character in a LitRPG novel is consciously interacting with the game or game-like world and attempting to progress within it. History The literary trope of getting inside a computer game is not new. Andre Norton's Quag Keep (1978) enters the world of the characters of a D&D game. Larry Niven and Steven Barnes's Dream Park (1981) has a setting of LARP-like games as a kind of reality TV in the future (2051). With the rise of MMORPGs in the 1990s came science fiction novels that utilised virtual game worlds for their plots.[citation needed] In Taiwan, the first of Yu Wo's nine ½ Prince novels appeared, published in October 2004 by Ming Significant Cultural. In Japan, the genre started in 1993 with the comedy Magical Circle Guru Guru where the characters lived in a JRPG and the cliches and mechanics of the time were often a source of humor. Later Japanese examples include .hack//Sign in 2002 and Sword Art Online in 2009.[citation needed] The Korean series Legendary Moonlight Sculptor has over 50 volumes.[citation needed] These novels and others were precursors to a more stat-heavy form of novel. Using a looser definition, a Russian publishing initiative identified the genre and gave it a name. The first Russian novel in this style appeared in 2012 at the Russian self-publishing website samizdat.ru, the novel Господство клана Неспящих (Clan Dominance: The Sleepless Ones) by Dem Mikhailov set in the fictional sword and sorcery game world of Valdira, printed by Leningrad Publishers later that year under the title Господство кланов (The Rule of the Clans) in the series Современный фантастический боевик (Modern Fantastic Action Novel) and translated into English as The Way of the Clan as a Kindle book in 2015.[citation needed] In 2013, EKSMO, a major Russian publishing house, started its multiple-author project entitled LitRPG. According to Magic Dome Books, a major translator of Russian LitRPG, the term LitRPG was coined in late 2013 during a brainstorming session between writer Vasily Mahanenko, EKSMO's science fiction editor Dmitry Malkin, and fellow LitRPG series editor and author Alex Bobl [ru]. Since 2014, EKSMO has been running LitRPG competitions and publishing the winning stories. Most LitRPG books were self-published, with some works in the genre having reached mainstream success. In January 2020 Aleron Kong's The Land: Monsters appeared on the Wall Street Journal bestseller list in the Fiction E-Books category.[non-primary source needed] Matt Dinniman's Dungeon Crawler Carl series was picked up by Ace Books, and has a television series in production. In March 2025, Dinniman's novel This Inevitable Ruin (book 7 of Dungeon Crawler Carl) reached #2 in the Audio Fiction category of The New York Times best-seller list. 
GameLit Many of the post-2014 writers in this field insist that depiction of a character's in-game progression must be part of the definition of LitRPG, leading to the emergence of the term GameLit to embrace stories that are set in a game universe but do not necessarily involve leveling and skill raising.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Competition_(biology)] | [TOKENS: 3633]
Contents Competition (biology) Competition is an interaction between organisms or species in which both require one or more resources that are in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other. In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time). There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis. Interference competition During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites. Male-male competition in red deer during rut is an example of interference competition that occurs within a species. Interference competition occurs directly between individuals via aggression when the individuals interfere with the foraging, survival, and reproduction of others, or by directly preventing their physical establishment in a portion of the habitat. An example of this can be seen between the ant Novomessor cockerelli and red harvester ants, where the former interferes with the ability of the latter to forage by plugging the entrances to their colonies with small rocks. Male bowerbirds, who create elaborate structures called bowers to attract potential mates, may reduce the fitness of their neighbors directly by stealing decorations from their structures. In animals, interference competition is a strategy mainly adopted by larger and stronger organisms within a habitat. As such, populations with high interference competition have adult-driven generation cycles. At first, the growth of juveniles is stunted by larger adult competitors. However, once the juveniles reach adulthood, they experience a secondary growth cycle. Plants, on the other hand, primarily engage in interference competition with their neighbors through allelopathy, or the production of biochemicals. Interference competition can be seen as a strategy that has a clear cost (injury or death) and benefit (obtaining resources that would have gone to other organisms). 
In order to cope with strong interference competition, other organisms often either do the same or engage in exploitation competition. For example, depending on the season, larger red deer males are competitively dominant due to interference competition. However, does and fawns have dealt with this through temporal resource partitioning, foraging for food only when adult males are not present. Exploitation competition Exploitation competition, or scramble competition, occurs indirectly when organisms both use a common limiting resource or shared food item. Instead of fighting or exhibiting aggressive behavior in order to win resources, exploitative competition occurs when resource use by one organism depletes the total amount available for other organisms. These organisms might never interact directly but compete by responding to changes in resource levels. Obvious examples of this phenomenon include a diurnal species and a nocturnal species that nevertheless share the same resources, or a plant that competes with neighboring plants for light, nutrients, and space for root growth. This form of competition typically rewards those organisms that claim the resource first. As such, exploitation competition is often size-dependent, and smaller organisms are favored since they typically have higher foraging rates. Since smaller organisms have an advantage when exploitative competition is important in an ecosystem, this mechanism of competition might lead to a juvenile-driven generation cycle: individual juveniles succeed and grow fast, but once they mature they are outcompeted by smaller organisms. In plants, exploitative competition can occur both above- and belowground. Aboveground, plants reduce the fitness of their neighbors by vying for sunlight; belowground, plants consume nitrogen by absorbing it into their roots, making nitrogen unavailable to nearby plants. Plants that produce many roots typically reduce soil nitrogen to very low levels, eventually killing neighboring plants. Exploitative competition has also been shown to occur both within species (intraspecific) and between different species (interspecific). Furthermore, many competitive interactions between organisms are some combination of exploitative and interference competition, meaning the two mechanisms are far from mutually exclusive. For example, a 2019 study found that the native thrips species Frankliniella intonsa was competitively dominant over the invasive thrips species Frankliniella occidentalis because it not only spent more time feeding (exploitative competition) but also more time guarding its resources (interference competition). Plants may also exhibit both forms of competition, not only scrambling for space for root growth but also directly inhibiting other plants' development through allelopathy. Apparent competition Apparent competition occurs when two otherwise unrelated prey species indirectly compete for survival through a shared predator. This form of competition typically manifests in new equilibrium abundances of each prey species. For example, suppose there are two species (species A and species B) which are preyed upon by a food-limited predator, species C. Scientists observe an increase in the abundance of species A and a decline in the abundance of species B. In an apparent competition model, this relationship is found to be mediated through predator C; a population explosion of species A increases the abundance of predator species C due to a greater total food source. 
Since there are now more predators, species A and B are hunted at higher rates than before. Thus, the success of species A comes at the expense of species B, not because they compete for resources, but because the increased numbers of species A have indirect effects on the predator population. This one-predator/two-prey model was explored by ecologists as early as 1925, but the term "apparent competition" was first coined by University of Florida ecologist Robert D. Holt in 1977. Holt found that field ecologists at the time were erroneously attributing negative interactions among prey species to niche partitioning and competitive exclusion, ignoring the role of food-limited predators. Apparent competition can help shape a species' realized niche, that is, the area and resources in which the species can actually persist given interspecific interactions. The effect on realized niches can be very strong, especially in the absence of more traditional interference or exploitative competition. A real-world example was studied in the late 1960s, when the introduction of snowshoe hares (Lepus americanus) to Newfoundland reduced the habitat range of native arctic hares (Lepus arcticus). While some ecologists hypothesized that this was due to an overlap in niche, others argued that the more plausible mechanism was that the growing snowshoe hare population led to an explosion in the population of the food-limited lynx, a shared predator of both prey species. Since arctic hares have relatively weaker defenses than snowshoe hares, they were excluded from woodland areas on the basis of differential predation. However, both apparent competition and exploitation competition might help explain the situation to some degree. Support for the impact of competition on the breadth of the realized niche with respect to diet is becoming more common in a variety of systems, based upon isotopic and spatial data, including both carnivores and small mammals. Apparent competition can be symmetric or asymmetric. Symmetric apparent competition negatively impacts both species equally (-,-), from which it can be inferred that both species will persist. However, asymmetric apparent competition occurs when one species is affected less than the other. The most extreme scenario of asymmetric apparent competition is when one species is not affected at all by the increase in the predator, which can be seen as a form of amensalism (0, -). Human impacts on endangered prey species have been characterized by conservation scientists as an extreme form of asymmetric apparent competition, often arising through the introduction of predator species into ecosystems or through resource subsidies. An example of fully asymmetric apparent competition, which often occurs near urban centers, is subsidies in the form of human garbage or waste. In the early 2000s, the common raven (Corvus corax) population in the Mojave Desert increased due to an influx of human garbage, leading to an indirect negative effect on juvenile desert tortoises (Gopherus agassizii). Asymmetry in apparent competition can also arise as a consequence of resource competition. An empirical example is provided by two small fish species in postglacial lakes in Western Canada, where resource competition between prickly sculpin and threespine stickleback leads to a spatial niche shift mainly in the threespine stickleback. 
As a consequence of this shift, predation by a shared trout predator increases for stickleback but decreases for sculpin in lakes where the two species co-occur, compared to lakes in which each species occurs on its own together with trout predators. Because sharing predators often goes together with competition for shared food resources, apparent competition and resource competition may often interplay in nature. Apparent competition has also been observed in and on the human body. The human immune system can act as a generalist predator: a high abundance of a certain bacterium may induce an immune response that damages all pathogens in the body. Another example is that of two populations of bacteria that can both support a predatory bacteriophage. In most situations, the population that is most resistant to infection by the shared predator will replace the other. Apparent competition has also been suggested as an exploitable phenomenon for cancer treatments. Highly specialized viruses developed to target malignant cancer cells often go locally extinct before eradicating all cancer cells. However, if a virus were developed that targets both healthy and unhealthy host cells to some degree, the large number of healthy cells would support the predatory virus for long enough to eliminate all malignant cells. Size-asymmetric competition Competition can be completely symmetric (all individuals receive the same amount of resources, irrespective of their size), perfectly size-symmetric (all individuals exploit the same amount of resource per unit biomass), or absolutely size-asymmetric (the largest individuals exploit all the available resource). Among plants, size asymmetry is context-dependent, and competition can be both asymmetric and symmetric depending on the most limiting resource. In forest stands, below-ground competition for nutrients and water is size-symmetric, because a tree's root system is typically proportionate to the biomass of the entire tree. Conversely, above-ground competition for light is size-asymmetric: since light has directionality, the forest canopy is dominated entirely by the largest trees. These trees exploit a share of the resource that is disproportionate to their biomass, making the interaction size-asymmetric. Whether above-ground or below-ground resources are more limiting can have major effects on the structure and diversity of ecological communities; in mixed beech stands, for example, size-asymmetric competition for light is a stronger predictor of growth than competition for soil resources. Within and between species Competition can occur between individuals of the same species, called intraspecific competition, or between individuals of different species, called interspecific competition. Studies show that intraspecific competition can regulate population dynamics (changes in population size over time). This occurs because individuals become crowded as the population grows. Since individuals within a population require the same resources, crowding causes resources to become more limited. Some individuals (typically small juveniles) eventually do not acquire enough resources and die or fail to reproduce. This reduces population size and slows population growth.[citation needed] Species also interact with other species that require the same resources. Consequently, interspecific competition can alter the sizes of many species' populations at the same time. 
Experiments demonstrate that when species compete for a limited resource, one species eventually drives the populations of other species extinct. These experiments suggest that competing species cannot coexist (they cannot live together in the same area) because the best competitor will exclude all other competing species.[citation needed] Intraspecific competition occurs when members of the same species compete for the same resources in an ecosystem. A simple example is a stand of equally spaced plants that are all of the same age: the higher the density of plants per unit ground area, the stronger the competition for resources such as light, water, or nutrients. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities, and the evolution of interacting species. An example among animals is the case of cheetahs and lions; since both species feed on similar prey, they are negatively impacted by the presence of the other because they will have less food; however, they still persist together, despite the prediction that under competition one will displace the other. In fact, lions sometimes steal prey items killed by cheetahs. Potential competitors can also kill each other, in so-called 'intraguild predation'. For example, in southern California coyotes often kill and eat gray foxes and bobcats, all three carnivores sharing the same stable prey (small mammals). An example among protozoa involves Paramecium aurelia and Paramecium caudatum. The Russian ecologist Georgy Gause studied the competition between these two species of Paramecium that occurred as a result of their coexistence. Through his studies, Gause proposed the competitive exclusion principle, observing the competition that occurred when their ecological niches overlapped. Competition has been observed between individuals, populations, and species, but there is little evidence that competition has been the driving force in the evolution of large groups. For example, mammals lived beside reptiles for many millions of years but were unable to gain a competitive edge until dinosaurs were devastated by the Cretaceous–Paleogene extinction event. Evolutionary strategies In evolutionary contexts, competition is related to r/K selection theory, which concerns the selection of traits that promote success in particular environments. The theory originates from work on island biogeography by the ecologists Robert MacArthur and E. O. Wilson. In r/K selection theory, selective pressures are hypothesized to drive evolution in one of two stereotyped directions: r- or K-selection. The terms r and K are derived from standard ecological algebra, as illustrated in the simple Verhulst equation of population dynamics, dN/dt = rN(1 - N/K), where N is the population size, r is the growth rate of the population, and K is the carrying capacity of its local environmental setting. Typically, r-selected species exploit empty niches and produce many offspring, each of which has a relatively low probability of surviving to adulthood. In contrast, K-selected species are strong competitors in crowded niches and invest more heavily in far fewer offspring, each with a relatively high probability of surviving to adulthood. 
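To make the roles of r, K, and N concrete, the following is a minimal numerical sketch of the Verhulst model described above, written in Python. It is an illustration only: the function name, parameter values, number of steps, and the simple Euler time step are assumptions chosen for readability, not anything specified in the source.

# Minimal sketch of the Verhulst (logistic) model dN/dt = r*N*(1 - N/K),
# integrated with a simple Euler step. All parameter values are illustrative.

def logistic_growth(n0, r, k, dt=0.1, steps=200):
    """Return the population trajectory N(t) for the logistic model."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt  # growth slows as N approaches the carrying capacity K
        trajectory.append(n)
    return trajectory

# A high r corresponds to the rapid colonisation favoured by r-selection;
# K is the crowded ceiling near which K-selected competitors operate.
population = logistic_growth(n0=2, r=0.8, k=500)
print(round(population[-1]))  # ends close to K = 500, since growth is density-limited

In this sketch, a small population grows almost exponentially at rate r, while the (1 - N/K) term drives growth towards zero as the population approaches the carrying capacity K.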
Competitive exclusion principle To explain how species coexist, in 1934 Georgy Gause proposed the competitive exclusion principle, also called the Gause principle: species cannot coexist if they have the same ecological niche. The word "niche" refers to a species' requirements for survival and reproduction. These requirements include both resources (like food) and proper habitat conditions (like temperature or pH). Gause reasoned that if two species had identical niches (required identical resources and habitats), they would attempt to live in exactly the same area and would compete for exactly the same resources. If this happened, the species that was the best competitor would always exclude its competitors from that area. Therefore, species must have at least slightly different niches in order to coexist. Character displacement Competition can cause species to evolve differences in traits. This occurs because individuals of a species with traits similar to those of a competing species always experience strong interspecific competition. These individuals have lower reproduction and survival than individuals with traits that differ from their competitors, and consequently they will not contribute many offspring to future generations. For example, two species of Darwin's finches, Geospiza fortis and Geospiza fuliginosa, can be found alone or together on the Galápagos Islands. Both species' populations actually have more individuals with intermediate-sized beaks when they live on islands without the other species present. However, when both species are present on the same island, competition is intense between individuals of both species that have intermediate-sized beaks, because they all require intermediate-sized seeds. Consequently, individuals with small and large beaks have greater survival and reproduction on these islands than individuals with intermediate-sized beaks. Different finch species can coexist if they have traits (for instance, beak size) that allow them to specialize in particular resources. When Geospiza fortis and Geospiza fuliginosa are present on the same island, G. fuliginosa tends to evolve a small beak and G. fortis a large beak. The observation that competing species' traits are more different when they live in the same area than when they live in different areas is called character displacement. For the two finch species, beak size was displaced: beaks became smaller in one species and larger in the other. Studies of character displacement are important because they provide evidence that competition is important in determining ecological and evolutionary patterns in nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEFreud1905-94] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce the appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into joking. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward to further recipients. Interaction is limited to the computer screen and is for the most part solitary. While the text of a joke is preserved, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and the responses to them. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays a consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
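Returning briefly to the GTVH labels described above, the following Python sketch shows one way such a six-part label might be represented and compared. It is a toy illustration under stated assumptions, not Attardo and Raskin's formal apparatus: the KR names follow the list given earlier (SO, LM, SI, TA, NS, LA), TA and LM are allowed to be empty as the quotation notes, the example values are invented, and similarity is reduced to a simple count of matching KRs.

from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class GTVHLabel:
    script_opposition: str             # SO
    logical_mechanism: Optional[str]   # LM (may be empty)
    situation: str                     # SI
    target: Optional[str]              # TA (may be empty)
    narrative_strategy: str            # NS
    language: str                      # LA

def shared_krs(a: GTVHLabel, b: GTVHLabel) -> int:
    """Count the Knowledge Resources on which two joke labels agree."""
    return sum(getattr(a, f.name) == getattr(b, f.name)
               for f in fields(GTVHLabel))

joke_1 = GTVHLabel("smart/dumb", "figure-ground reversal",
                   "changing a lightbulb", "group X", "riddle", "plain wording")
joke_2 = GTVHLabel("smart/dumb", None,
                   "changing a lightbulb", "group Y", "riddle", "plain wording")

print(shared_krs(joke_1, joke_2))  # 4 of 6 KRs match: close variants of one joke

In this toy form, selecting jokes that differ in only one or two defined KRs, as described above, amounts to filtering pairs by their shared_krs count.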
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH/GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
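As a rough illustration of the template-driven pun programs described above, the following Python sketch fills a fixed knock-knock template from a small, hand-built table of set-up words and punchlines. It is a toy example only: the table is invented for illustration, the program has no understanding of why its output is (or is not) funny, and real research systems depend on far richer lexical resources.

import random

# A finite set of pre-defined punning options; the program can only ever
# recombine what is already in this table.
PUN_TABLE = [
    ("Lettuce", "Lettuce in, it's freezing out here!"),
    ("Olive", "Olive right next door to you!"),
    ("Honeydew", "Honeydew you want to hear another joke?"),
]

TEMPLATE = ("Knock, knock.\n"
            "Who's there?\n"
            "{word}.\n"
            "{word} who?\n"
            "{punchline}")

def make_knock_knock(rng=random):
    """Pick one pre-defined pun and slot it into the fixed template."""
    word, punchline = rng.choice(PUN_TABLE)
    return TEMPLATE.format(word=word, punchline=punchline)

if __name__ == "__main__":
    print(make_knock_knock())

Everything that looks like creativity here lives in the hand-written table, which is precisely the limitation of toy punning systems noted above.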
========================================
[SOURCE: https://en.wikipedia.org/wiki/Frigg] | [TOKENS: 4427]
Contents Frigg Frigg (/frɪɡ/; Old Norse: [ˈfriɡː]) is a goddess, one of the Æsir, in Germanic mythology. In Norse mythology, the source of most surviving information about her, she is associated with marriage, prophecy, clairvoyance and motherhood, and dwells in the wetland halls of Fensalir. In wider Germanic mythology, she is known in Old High German as Frīja, in Langobardic as Frēa, in Old English as Frīg, in Old Frisian as Frīa, and in Old Saxon as Frī, in archaic Swedish as Frigg, Frigga, Frigge, Friggie (Runic: ᚠᚱᚤᚼ, Frygh; Old Swedish: Frigg, Frigh, Freghe, Frege, Freye, Frey, Freya, Frea, Fria, etc), all ultimately stemming from the Proto-Germanic theonym *Frijjō. Nearly all sources portray her as the wife of the god Odin. In Old High German and Old Norse sources, she is specifically connected with Fulla, but she is also associated with the goddesses Lofn, Hlín, Gná, and ambiguously with the Earth, otherwise personified as an apparently separate entity Jörð (Old Norse: 'Earth'). The children of Frigg and Odin include the gleaming god Baldr. The English weekday name Friday (ultimately meaning 'Frigg's Day') bears her name. After Christianization, the mention of Frigg continued to occur in Scandinavian folklore. During modern times, Frigg has appeared in popular culture, has been the subject of art and receives veneration in Germanic Neopaganism. Name and origin The theonyms Frigg (Old Norse), Frīja (Old High German), Frīg (Old English), Frīa (Old Frisian), and Frī (Old Saxon) are cognates (linguistic siblings from the same origin). They stem from the Proto-Germanic feminine noun *Frijjō, which emerged as a substantivized form of the adjective *frijaz ('free') via Holtzmann's law. In a clan-based societal system, the meaning 'free' arose from the meaning 'related'. The name is indeed etymologically close to the Sanskrit priyā and the Avestan fryā ('own, dear, beloved'), all ultimately descending from the Proto-Indo-European stem *priH-o-, denoting 'one's own, beloved'. The Proto-Germanic verb *frijōnan ('to love'), as well as the nouns *frijōndz ('friend') and *frijađwō ('friendship, peace'), are also related. An -a suffix has been sometimes applied by modern editors to denote femininity, resulting in the form Frigga. This spelling also serves the purpose of distancing the goddess from the English word frig, with a primary meaning of masturbate or to the common alternative to the English profanity fuck. Several place names refer to Frigg in what are now Norway and Sweden, although her name is altogether absent in recorded place names in Denmark. The connection with and possible earlier identification of the goddess Freyja with Frigg in the Proto-Germanic period is a matter of scholarly debate (see Frigg and Freyja common origin hypothesis). Like the name of the group of gods to which Freyja belongs, the Vanir, the name Freyja is not attested outside of Scandinavia. This is in contrast to the name of the goddess Frigg, who is also attested as a goddess among West Germanic peoples. Evidence is lacking for the existence of a common Germanic goddess from which Old Norse Freyja descends, but scholars have commented that this may simply be due to the scarcity of surviving sources. Regarding the Freyja–Frigg common origin hypothesis, scholar Stephan Grundy writes that "the problem of whether Frigg or Freyja may have been a single goddess originally is a difficult one, made more so by the scantiness of pre-Viking Age references to Germanic goddesses, and the diverse quality of the sources. 
The best that can be done is to survey the arguments for and against their identity, and to see how well each can be supported." The English weekday name Friday comes from Old English Frīġedæġ, meaning 'day of Frig'. It is cognate with Old Frisian Frīadei (≈ Fri(g)endei), Middle Dutch Vridach (≈ Vriendach), Middle Low German Vrīdach (≈ Vrīgedach), and Old High German Frîatac. The Old Norse Frjádagr was borrowed from a West Germanic language. All of these terms derive from Late Proto-Germanic *Frijjōdag ('Day of Frijjō'), a calque of Latin Veneris dies ('Day of Venus'; cf. modern Italian venerdì, French vendredi, Spanish viernes). The Germanic goddess' name has substituted for the Roman name of a comparable deity, a practice known as interpretatio germanica. Although the Old English theonym Frīg is only found in the name of the weekday, it is also attested as a common noun in frīg ('love, affections [plural], embraces [in poetry]'). The Old Norse weekday Freyjudagr, a rare synonym of Frjádagr, saw the replacement of the first element with the genitive of Freyja. Attestations The 7th-century Origo Gentis Langobardorum, and Paul the Deacon's 8th-century Historia Langobardorum derived from it, recount a founding myth of the Langobards, a Germanic people who ruled a region of what is now Italy (see Lombardy). According to this legend, a "small people" known as the Winnili were ruled by a woman named Gambara who had two sons, Ybor and Agio. The Vandals, ruled by Ambri and Assi, came to the Winnili with their army and demanded that they pay them tribute or prepare for war. Ybor, Agio, and their mother Gambara rejected their demands for tribute. Ambra and Assi then asked the god Godan for victory over the Winnili, to which Godan responded (in the longer version in the Origo): "Whom I shall first see when at sunrise, to them will I give the victory." Meanwhile, Ybor and Agio called upon Frea, Godan's wife. Frea counseled them that "at sunrise the Winnil[i] should come, and that their women, with their hair let down around the face in the likeness of a beard should also come with their husbands". At sunrise, Frea turned Godan's bed around to face east and woke him. Godan saw the Winnili, including their whiskered women, and asked "who are those Long-beards?" Frea responded to Godan, "As you have given them a name, give them also the victory". Godan did so, "so that they should defend themselves according to his counsel and obtain the victory". Thenceforth the Winnili were known as the Langobards (Langobardic "long-beards"). A 10th-century manuscript found in what is now Merseburg, Germany, features an invocation known as the Second Merseburg Incantation. The incantation calls upon various continental Germanic gods, including Old High German Frija and a goddess associated with her—Volla, to assist in healing a horse: In the Poetic Edda, compiled during the 13th century from earlier traditional material, Frigg is mentioned in the poems Völuspá, Vafþrúðnismál, the prose of Grímnismál, Lokasenna, and Oddrúnargrátr. Frigg receives three mentions in the Poetic Edda poem Völuspá. In the first mention the poem recounts that Frigg wept for the death of her son Baldr in Fensalir. Later in the poem, when the future death of Odin is foretold, Odin is referred to as the "beloved of Frigg" and his future death is referred to as the "second grief of Frigg". 
Like the reference to Frigg weeping in Fensalir earlier in the poem, the implied "first grief" is a reference to the grief she felt upon the death of her son, Baldr. Frigg plays a prominent role in the prose introduction to the poem, Grímnismál. The introduction recounts that two sons of king Hrauðungr, Agnar (age 10) and Geirröðr (age 8), once sailed out with a trailing line to catch small fish, but wind drove them out into the ocean and, during the darkness of night, their boat wrecked. The brothers went ashore, where they met a crofter. They stayed on the croft for one winter, during which the couple separately fostered the two children: the old woman fostered Agnar and the old man fostered Geirröðr. Upon the arrival of spring, the old man brought them a ship. The old couple took the boys to the shore, and the old man took Geirröðr aside and spoke to him. The boys entered the boat and a breeze came. The boat returned to the harbor of their father. Geirröðr, forward in the ship, jumped to shore and pushed the boat, containing his brother, out and said "go where an evil spirit may get thee." Away went the ship and Geirröðr walked to a house, where he was greeted with joy; while the boys were gone, their father had died, and now Geirröðr was king. He "became a splendid man." The scene switches to Odin and Frigg sitting in Hliðskjálf, "look[ing] into all the worlds." Odin says: "'Seest thou Agnar, thy foster-son, where he is getting children a giantess [Old Norse gȳgi] in a cave? while Geirröd, my foster son, is a king residing in his country.' Frigg answered, 'He is so inhospitable that he tortures his guests, if he thinks that too many come.'" Odin replied that this was a great untruth and so the two made a wager. Frigg sent her "waiting-maid" Fulla to warn Geirröðr to be wary, lest a wizard who seeks him should harm him, and that he would know this wizard by the refusal of dogs, no matter how ferocious, to attack the stranger. While it was not true that Geirröðr was inhospitable with his guests, Geirröðr did as instructed and had the wizard arrested. Upon being questioned, the wizard, wearing a blue cloak, said no more than that his name is Grímnir. Geirröðr has Grímnir tortured and sits him between two fires for 8 nights. Upon the 9th night, Grímnir is brought a full drinking horn by Geirröðr's son, Agnar (so named after Geirröðr's brother), and the poem continues without further mention or involvement of Frigg. In the poem Lokasenna, where Loki accuses nearly every female in attendance of promiscuity and/or unfaithfulness, an aggressive exchange occurs between the god Loki and the goddess Frigg (and thereafter between Loki and the goddess Freyja about Frigg). A prose introduction to the poem describes that numerous gods and goddesses attended a banquet held by Ægir. These gods and goddesses include Odin and, "his wife", Frigg. In the poem Oddrúnargrátr, Oddrún helps Borgny give birth to twins. In thanks, Borgny invokes vættir, Frigg, Freyja, and other unspecified deities. Frigg is mentioned throughout the Prose Edda, compiled in the 13th century by Snorri Sturluson. Frigg is first mentioned in the Prose Edda Prologue, wherein a euhemerized account of the Norse gods is provided. The author describes Frigg as the wife of Odin, and, in a case of folk etymology, the author attempts to associate the name Frigg with the Latin-influenced form Frigida. The Prologue adds that both Frigg and Odin "had the gift of prophecy." 
In the next section of the Prose Edda, Gylfaginning, High tells Gangleri (the king Gylfi in disguise) that Frigg, daughter of Fjörgynn (Old Norse Fjörgynsdóttir) is married to Odin and that the Æsir are descended from the couple, and adds that "the earth [Jörðin] was [Odin's] daughter and his wife." According to High, the two had many sons, the first of which was the mighty god Thor. Later in Gylfaginning, Gangleri asks about the ásynjur, a term for Norse goddesses. High says that "highest" among them is Frigg and that only Freyja "is highest in rank next to her." Frigg dwells in Fensalir "and it is very splendid." In this section of Gylfaginning, Frigg is also mentioned in connection to other ásynjur: Fulla carries Frigg's ashen box, "looks after her footwear and shares her secrets;" Lofn is given special permission by Frigg and Odin to "arrange unions" among men and women; Hlín is charged by Frigg to protect those that Frigg deem worthy of keeping from danger; and Gná is sent by Frigg "into various worlds to carry out her business." In section 49 of Gylfaginning, a narrative about the fate of Frigg's son Baldr is told. According to High, Baldr once started to have dreams indicating that his life was in danger. When Baldr told his fellow Æsir about his dreams, the gods met together for a thing and decided that they should "request immunity for Baldr from all kinds of danger." Frigg subsequently receives promises from the elements, the environment, diseases, animals, and stones, amongst other things. The request successful, the Æsir make sport of Baldr's newfound invincibility; shot or struck, Baldr remained unharmed. However, Loki discovers this and is not pleased by this turn of events, so, in the form of a woman, he goes to Frigg in Fensalir. There, Frigg asks this female visitor what the Æsir are up to assembled at the thing. The woman says that all of the Æsir are shooting at Baldr and yet he remains unharmed. Frigg explains that "Weapons and wood will not hurt Baldr. I have received oaths from them all." The woman asks Frigg if all things have sworn not to hurt Baldr, to which Frigg notes one exception; "there grows a shoot of a tree to the west of Val-hall. It is called mistletoe. It seemed young to me to demand the oath from." Loki immediately disappears. Now armed with mistletoe, Loki arrives at the thing where the Æsir are assembled and tricks the blind Höðr, Baldr's brother, into shooting Baldr with a mistletoe projectile. To the horror of the assembled gods, the mistletoe goes directly through Baldr, killing him. Standing in horror and shock, the gods are initially only able to weep due to their grief. Frigg speaks up and asks "who there was among the Æsir who wished to earn all her love and favour and was willing to ride the road to Hel and try if he could find Baldr, and offer Hel a ransom if she would let Baldr go back to Asgard." Hermóðr, Baldr's brother, accepts Frigg's request and rides to Hel. Meanwhile, Baldr is given a grand funeral attended by many beings—foremost mentioned of which are his mother and father, Frigg and Odin. During the funeral, Nanna dies of grief and is placed in the funeral pyre with Baldr, her dead husband. Hermóðr locates Baldr and Nanna in Hel. Hermodr secures an agreement for the return of Baldr and with Hermóðr Nanna sends gifts to Frigg (a linen robe) and Fulla (a finger-ring). Hermóðr rides back to the Æsir and tells them what has happened. 
However, the agreement fails due to the sabotage of a jötunn in a cave named Þökk (Old Norse 'thanks'), described as perhaps Loki in disguise. Frigg is mentioned several times in the Prose Edda section Skáldskaparmál. The first mention occurs at the beginning of the section, where the Æsir and Ásynjur are said to have once held a banquet in a hall in a land of gods, Asgard. Frigg is one of the twelve ásynjur in attendance. In Ynglinga saga, the first book of Heimskringla, a Euhemerized account of the origin of the gods is provided. Frigg is mentioned once. According to the saga, while Odin was away, Odin's brothers Vili and Vé oversaw Odin's holdings. Once, while Odin was gone for an extended period, the Æsir concluded that he was not coming back. His brothers started to divvy up Odin's inheritance, "but his wife Frigg they shared between them. However, a short while afterwards, [Odin] returned and took possession of his wife again. In Völsunga saga, the great king Rerir and his wife (unnamed) are unable to conceive a child; "that lack displeased them both, and they fervently implored the gods that they might have a child. It is said that Frigg heard their prayers and told Odin what they asked." Archaeological record A 12th century depiction of a cloaked but otherwise nude woman riding a large cat appears on a wall in the Schleswig Cathedral in Schleswig-Holstein, Northern Germany. Beside her is similarly a cloaked yet otherwise nude woman riding a distaff. Due to iconographic similarities to the literary record, these figures have been theorized as depictions of Freyja and Frigg respectively. Scholarly reception and interpretation Due to numerous similarities, some scholars have proposed that the Old Norse goddesses Frigg and Freyja descend from a common entity from the Proto-Germanic period. Regarding a Freyja-Frigg common origin hypothesis, scholar Stephan Grundy comments that "the problem of whether Frigg or Freyja may have been a single goddess originally is a difficult one, made more so by the scantiness of pre-Viking Age references to Germanic goddesses, and the diverse quality of the sources. The best that can be done is to survey the arguments for and against their identity, and to see how well each can be supported." Unlike Frigg but like the name of the group of gods to which Freyja belongs, the Vanir, the name Freyja is not attested outside of Scandinavia, as opposed to the name of the goddess Frigg, who is attested as a goddess common among the Germanic peoples, and whose name is reconstructed as Proto-Germanic *Frijjō. Similar proof for the existence of a common Germanic goddess from which Freyja descends does not exist, but scholars have commented that this may simply be due to the scarcity of evidence outside of the North Germanic record. Modern influence Frigg is referenced in art and literature into the modern period. In the 18th century, Gustav III of Sweden, king of Sweden, composed Friggja, a play, so named after the goddess, and H. F. Block and Hans Friedrich Blunck's Frau Frigg und Doktor Faust in 1937. Richard Wagner included Fricka in his 1870 opera Die Walküre, the second of his Der Ring des Nibelungen cycle, as the goddess wife of Wotan in a key scene for the plot of the whole cycle. Other examples include fine art works by K. Ehrenberg (Frigg, Freyja, drawing, 1883), John Charles Dollman (Frigga Spinning the Clouds, painting, c. 1900), Emil Doepler (Wodan und Frea am Himmelsfenster, painting, 1901), and H. Thoma (Fricka, drawing, date not provided). 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Scandinavia] | [TOKENS: 6465]
Contents Scandinavia Scandinavia is a subregion of northern Europe that most commonly comprises Denmark, Norway, and Sweden, which share strong historical, cultural, and linguistic ties. It may also refer to the Scandinavian Peninsula (which excludes Denmark but includes part of northern Finland). In English usage, the term is also used as a synonym for the Nordic countries. Iceland and the Faroe Islands are sometimes included, due to their ethnolinguistic ties to Sweden, Norway, and Denmark. Although Finland differs from the other Nordic countries in this respect, some authors consider it Scandinavian because of its economic and cultural similarities. The geography of the region is varied, from the Norwegian fjords in the west and Scandinavian mountains covering parts of Norway and Sweden, to the low and flat areas of Denmark in the south, as well as archipelagos and lakes in the east. Most of the population in the region live in the more temperate southern regions, with the northern parts having long, cold winters. During the Viking Age, Scandinavian peoples participated in large-scale raiding, conquest, colonization and trading mostly throughout Europe. They also used their longships for exploration, becoming the first Europeans to reach North America. These exploits saw the establishment of the North Sea Empire, which comprised large parts of Scandinavia and Great Britain, though it was relatively short-lived. Scandinavia was eventually Christianized, and the coming centuries saw various unions of Scandinavian nations, most notably the Kalmar Union of Denmark, Norway and Sweden, which lasted for over 100 years until the Swedish king Gustav I led Sweden out of the union. Denmark and Norway, as well as Schleswig-Holstein, were then united until 1814 as Denmark–Norway. Numerous wars between the nations followed, which shaped the modern borders and led to the establishment of the Swedish Empire in the 17th and early 18th centuries. The most recent Scandinavian union was the union between Sweden and Norway, which ended in 1905. In modern times the region has prospered, with the economies of the countries being amongst the strongest in Europe. Sweden, Denmark, Norway, Iceland, and Finland all maintain welfare systems considered to be generous, with the economic and social policies of the countries being dubbed the "Nordic model". Geography The geography of Scandinavia is extremely varied. Notable are the Norwegian fjords, the Scandinavian Mountains covering much of Norway and parts of Sweden, the flat, low areas in Denmark and the archipelagos of Finland, Norway and Sweden. Finland and Sweden have many lakes and moraines, legacies of the ice age, which ended about ten millennia ago. The southern regions of Scandinavia, which are also the most populous regions, have a temperate climate. Scandinavia extends north of the Arctic Circle, but has relatively mild weather for its latitude due to the Gulf Stream. Many of the Scandinavian mountains have an alpine tundra climate. The climate varies from north to south and from west to east: a marine west coast climate (Cfb) typical of western Europe dominates in Denmark, the southernmost part of Sweden and along the west coast of Norway, reaching north to 65°N, with orographic lift giving very high annual precipitation (up to 5,000 mm) in some areas of western Norway.
The central part – from Oslo to Stockholm – has a humid continental climate (Dfb), which gradually gives way to subarctic climate (Dfc) further north and cool marine west coast climate (Cfc) along the northwestern coast. A small area along the northern coast east of the North Cape has tundra climate (Et) as a result of a lack of summer warmth. The Scandinavian Mountains block the mild and moist air coming from the southwest, thus northern Sweden and the Finnmarksvidda plateau in Norway receive little precipitation and have cold winters. Large areas in the Scandinavian mountains have alpine tundra climate. The warmest temperature ever recorded in Scandinavia is 38.0 °C in Målilla (Sweden). The coldest temperature ever recorded is −52.6 °C in Vuoggatjålme, Arjeplog (Sweden). The coldest month was February 1985 in Vittangi (Sweden) with a mean of −27.2 °C. Southwesterly winds further warmed by foehn wind can give warm temperatures in narrow Norwegian fjords in winter. Tafjord has recorded 17.9 °C in January and Sunndal 18.9 °C in February. Scandinavia as a concept The words Scandinavia and Scania (Skåne, the southernmost province of Sweden) are both thought to go back to the Proto-Germanic compound *Skaðin-awjō (the ð represented in Latin by t or d), which appears later in Old English as Scedenig and in Old Norse as Skáney. The earliest identified source for the name Scandinavia is Pliny the Elder's Natural History, dated to the 1st century AD. Various references to the region can also be found in Pytheas, Pomponius Mela, Tacitus, Ptolemy, Procopius and Jordanes, usually in the form of Scandza. It is believed that the name used by Pliny may be of West Germanic origin, originally denoting Scania. According to some scholars, the Germanic stem can be reconstructed as *skaðan-, meaning "danger" or "damage". The second segment of the name has been reconstructed as *awjō, meaning "land on the water" or "island". The name Scandinavia would then mean "dangerous island", which is considered to refer to the treacherous sandbanks surrounding Scania. Skanör in Scania, with its long Falsterbo reef, has the same stem (skan) combined with -ör, which means "sandbanks". Alternatively, Sca(n)dinavia and Skáney, along with the Old Norse goddess name Skaði, may be related to Proto-Germanic *skaðwa- (meaning "shadow"). John McKinnell comments that this etymology suggests that the goddess Skaði may have once been a personification of the geographical region of Scandinavia or associated with the underworld. Another possibility is that all or part of the segments of the name came from the pre-Germanic Mesolithic people inhabiting the region. In modernity, Scandinavia is a peninsula, but between approximately 10,300 and 9,500 years ago the southern part of Scandinavia was an island separated from the northern peninsula, with water exiting the Baltic Sea through the area where Stockholm is now located. The Latin names in Pliny's text gave rise to different forms in medieval Germanic texts. In Jordanes' history of the Goths (AD 551), the form Scandza is the name used for their original home, separated by sea from the land of Europe (chapter 1, 4). Where Jordanes meant to locate this quasi-legendary island is still a hotly debated issue, both in scholarly discussions and in the nationalistic discourse of various European countries. 
The form Scadinavia as the original home of the Langobards appears in Paul the Deacon' Historia Langobardorum, but in other versions of Historia Langobardorum appear the forms Scadan, Scandanan, Scadanan and Scatenauge. Frankish sources used Sconaowe and Aethelweard, an Anglo-Saxon historian, used Scani. In Beowulf, the forms Scedenige and Scedeland are used while the Alfredian translation of Orosius and Wulfstan's travel accounts used the Old English Sconeg. The earliest Sámi joik texts written down refer to the world as Skadesi-suolu in Northern Sámi and Skađsuâl in Skolt Sámi, meaning "Skaði's island". Svennung considers the Sámi name to have been introduced as a loanword from the North Germanic languages; "Skaði" is the jötunn stepmother of Freyr and Freyja in Norse mythology. It has been suggested that Skaði to some extent is modelled on a Sámi woman. The name for Skaði's father Þjazi is known in Sámi as Čáhci, "the waterman"; and her son with Odin, Sæmingr, can be interpreted as a descendant of Saam, the Sámi population. Older joik texts give evidence of the old Sámi belief about living on an island and state that the wolf is known as suolu gievra, meaning "the strong one on the island". The Sámi place name Sulliidčielbma means "the island's threshold" and Suoločielgi means "the island's back". In recent substrate studies, Sámi linguists have examined the initial cluster sk- in words used in the Sámi languages and concluded that sk- is a phonotactic structure of non-Sámi origin. Although the term Scandinavia used by Pliny the Elder probably originated in the ancient Germanic languages, the modern form Scandinavia does not descend directly from the ancient Germanic term. Rather the word was brought into use in Europe by scholars borrowing the term from ancient sources like Pliny, and was used vaguely for Scania and the southern region of the peninsula. The term was popularised by the linguistic and cultural Scandinavist movement, which asserted the common heritage and cultural unity of the Scandinavian countries and rose to prominence in the 1830s. The popular usage of the term in Sweden, Denmark and Norway as a unifying concept became established in the 19th century through poems such as Hans Christian Andersen's "I am a Scandinavian" of 1839. After a visit to Sweden, Andersen became a supporter of early political Scandinavism. In a letter describing the poem to a friend, he wrote: "All at once I understood how related the Swedes, the Danes and the Norwegians are, and with this feeling I wrote the poem immediately after my return: 'We are one people, we are called Scandinavians!'". The influence of Scandinavism as a Scandinavist political movement peaked in the middle of the 19th century, between the First Schleswig War (1848–1850) and the Second Schleswig War (1864).[citation needed] Charles XV, king of Sweden, also proposed a unification of Denmark, Norway and Sweden into a single united kingdom. The background for the proposal was the tumultuous events during the Napoleonic Wars in the beginning of the century. This war resulted in Finland (formerly the eastern third of Sweden) becoming the Russian Grand Duchy of Finland in 1809 and Norway (de jure in union with Denmark since 1387, although de facto treated as a province) becoming independent in 1814, but thereafter swiftly forced to accept a personal union with Sweden. The dependent territories Iceland, the Faroe Islands and Greenland, historically part of Norway, remained with Denmark in accordance with the Treaty of Kiel. 
Sweden and Norway were thus united under the Swedish monarch, but Finland's inclusion in the Russian Empire excluded any possibility for a political union between Finland and any of the other Nordic countries. The end of the Scandinavian political movement came when Denmark was denied the military support promised from Sweden and Norway to annex the (Danish) Duchy of Schleswig, which together with the (German) Duchy of Holstein had been in personal union with Denmark. The Second war of Schleswig followed in 1864, a brief but disastrous war between Denmark and Prussia (supported by Austria). Schleswig-Holstein was conquered by Prussia and after Prussia's success in the Franco-Prussian War a Prussian-led German Empire was created and a new power-balance in the Baltic region was established. The Scandinavian Monetary Union, established in 1873, lasted until World War I. The term Scandinavia (sometimes specified in English as Continental Scandinavia or mainland Scandinavia) is ordinarily used locally for Denmark, Norway and Sweden as a subset of the Nordic countries (known in Norwegian, Danish, and Swedish as Norden; Finnish: Pohjoismaat, Icelandic: Norðurlöndin, Faroese: Norðurlond). However, in English usage, the term Scandinavia is sometimes used as a synonym or near-synonym for what are known locally as Nordic countries. Usage in English is different from usage in the Scandinavian languages themselves (which use Scandinavia in the narrow meaning), and by the fact that the question of whether a country belongs to Scandinavia is politicised. People from the Nordic world beyond Norway, Denmark and Sweden may be offended at being either included in or excluded from the category of "Scandinavia". Nordic countries is used unambiguously for Denmark, Norway, Sweden, Finland and Iceland, including their associated territories Greenland, the Faroe Islands and the Åland Islands. The geological term Fennoscandia refers to the Fennoscandian Shield (or Baltic Shield), which includes the Scandinavian Peninsula, Finland and Karelia, and excludes Denmark and other parts of the wider Nordic world. The term Fennoscandia is sometimes used in a political sense to refer to Norway, Sweden, Denmark, and Finland. The term Scandinavian may be used with two principal meanings, in an ethnic or cultural sense and as a modern and more inclusive demonym. In the ethnic or cultural sense, the term Scandinavian traditionally refers to speakers of Scandinavian languages, who are mainly descendants of the peoples historically known as Norsemen. In this sense the term refers primarily to native Danes, Norwegians and Swedes as well as descendants of Scandinavian settlers such as the Icelanders and the Faroese. The term is also used in this ethnic sense, to refer to the modern descendants of the Norse, in studies of linguistics and culture. Additionally the term Scandinavian is used demonymically to refer to all modern inhabitants or citizens of Scandinavian countries. Within Scandinavia the demonymic term primarily refers to inhabitants or citizens of Denmark, Norway and Sweden. In English usage inhabitants or citizens of Iceland, the Faroe Islands and Finland are sometimes included as well. English general dictionaries often define the noun Scandinavian demonymically as meaning any inhabitant of Scandinavia (which might be narrowly conceived or broadly conceived). There is a certain ambiguity and political contestation as to which peoples should be referred to as Scandinavian in this broader sense. 
Sámi people who live in Norway and Sweden are generally included as Scandinavians in the demonymic sense; the Sámi of Finland may be included in English usage, but usually not in local usage; the Sámi of Russia are not included. However, the use of the term "Scandinavian" with reference to the Sámi is complicated by the historical attempts by Scandinavian majority peoples and governments in Norway and Sweden to assimilate the Sámi people into the Scandinavian culture and languages, making the inclusion of the Sámi as "Scandinavians" controversial among many Sámi. Modern Sámi politicians and organizations often stress the status of the Sámi as a people separate from and equal to the Scandinavians, with their own language and culture, and are apprehensive about being included as "Scandinavians" in light of earlier Scandinavian assimilation policies. Languages Two language groups have coexisted in Scandinavia since prehistory—the North Germanic languages (Scandinavian languages) and the Uralic languages, Sámi and Finnish. Most people in Scandinavia today speak Scandinavian languages that evolved from Old Norse, originally spoken by ancient Germanic tribes in southern Scandinavia. The Continental Scandinavian languages—Danish, Norwegian and Swedish—form a dialect continuum and are considered mutually intelligible. The Insular Scandinavian languages—Faroese and Icelandic—on the other hand, are only partially intelligible to speakers of the continental Scandinavian languages. The Uralic languages are linguistically unrelated to the Scandinavian languages. Finnish is the majority language in Finland, and a recognized minority language in Sweden. Meänkieli and Kven, sometimes considered as dialects of Finnish, are recognized minority languages in Sweden and Norway, respectively. The Sámi languages are indigenous minority languages in Scandinavia, spoken by the Sámi people in northern Scandinavia. The North Germanic languages of Scandinavia are traditionally divided into an East Scandinavian branch (Danish and Swedish) and a West Scandinavian branch (Norwegian, Icelandic and Faroese), but because of changes appearing in the languages since 1600 the East Scandinavian and West Scandinavian branches are now usually reconfigured into Insular Scandinavian (ö-nordisk/øy-nordisk) featuring Icelandic and Faroese and Continental Scandinavian (Skandinavisk), comprising Danish, Norwegian and Swedish. The modern division is based on the degree of mutual comprehensibility between the languages in the two branches. The populations of the Scandinavian countries, with common Scandinavian roots in language, can—at least with some training—understand each other's standard languages as they appear in print and are heard on radio and television. The reason Danish, Swedish and the two official written versions of Norwegian (Nynorsk and Bokmål) are traditionally viewed as different languages, rather than dialects of one common language, is that each is a well-established standard language in its respective country. Danish, Swedish and Norwegian have since medieval times been influenced to varying degrees by Middle Low German and standard German. That influence was due not only to proximity, but also to the rule of Denmark—and later Denmark-Norway—over the German-speaking region of Holstein, and to Sweden's close trade with the Hanseatic League. Norwegians are accustomed to variation and may perceive Danish and Swedish only as slightly more distant dialects. 
This is because they have two official written standards, in addition to the habit of strongly holding on to local dialects. The people of Stockholm, Sweden and Copenhagen, Denmark have the greatest difficulty in understanding other Scandinavian languages. In the Faroe Islands and Iceland, learning Danish is mandatory. This causes Faroese people as well as Icelandic people to become bilingual in two very distinct North Germanic languages, making it relatively easy for them to understand the other two Mainland Scandinavian languages. Although Iceland was under the political control of Denmark until a much later date (1918), very little influence and borrowing from Danish has occurred in the Icelandic language. Icelandic remained the preferred language among the ruling classes in Iceland. Danish was not used for official communications, most of the royal officials were of Icelandic descent and the language of the church and law courts remained Icelandic. Finland has a Swedish-speaking minority which constitutes approximately 5% of the total population. The Swedish-speakers live mainly on the coastline starting from approximately the city of Porvoo (Sw: Borgå) (in the Gulf of Finland) up to the city of Kokkola (Sw: Karleby) (in the Bay of Bothnia).[citation needed] The coastal region of Ostrobothnia has a Swedish-speaking majority, whereas plenty of areas on this coastline are nearly unilingually Finnish, like the region of Satakunta.[citation needed] Swedish spoken in today's Finland includes a lot of words that are borrowed from Finnish, whereas the written language remains closer to that of Sweden. Åland, an autonomous region of Finland situated in the archipelago between Finland and Sweden, is entirely Swedish-speaking. The Scandinavian languages are (as a language family) unrelated to Finnish and the Sámi languages, which as Uralic languages are distantly related to each other. Owing to the close proximity, there is a great deal of borrowing from the Swedish and Norwegian languages in Finnish and Sámi. Finnish is the majority language of Finland, spoken by 95% of the population. Swedish has had a strong influence on Finnish because it served as the dominant administrative and cultural language during the centuries when Finland belonged to the Swedish realm, and it retained a strong position during the subsequent Russian period. Finnish-speakers often needed to learn Swedish in order to pursue higher-status positions. Finland is officially bilingual: Finnish and Swedish are both national languages, with equal legal status. Children are taught the other official language at school: for Swedish-speakers this is Finnish (usually from the 3rd grade), while for Finnish-speakers it is Swedish (usually from the 3rd, 5th or 7th grade).[citation needed] Finnish speakers constitute a language minority in both Sweden and Norway. Meänkieli and Kven are Finnish dialects mainly spoken in the Swedish part of the Torne Valley and surrounding areas, and in the Norwegian counties of Troms and Finnmark, respectively. Meänkieli has held an official status as a minority language in Sweden since 2000, and Kven in Norway since 2005. Karelian is a language closely related to Finnish. In Finland, it has an official status as a non-territorial minority language within the framework of the European Charter for Regional or Minority Languages. The Sámi languages are indigenous minority languages in Scandinavia. 
They belong to their own branch of the Uralic language family and are unrelated to the North Germanic languages other than by limited grammatical (particularly lexical) characteristics resulting from prolonged contact. Sámi is divided into several languages or dialects. Consonant gradation is a feature in both Finnish and northern Sámi dialects, but it is not present in southern Sámi, which is considered to have a different language history. According to the Sámi Information Centre of the Sámi Parliament of Sweden, southern Sámi may have originated in an earlier migration from the south into the Scandinavian Peninsula. German is a recognized minority language in Denmark. Yiddish, Romani Chib/Romanes, Scandoromani are amongst the languages protected in parts of Scandinavia under the European Charter for Regional or Minority Languages. Recent migration has added even more languages. History A key ancient description of Scandinavia was provided by Pliny the Elder, though his mentions of Scatinavia and surrounding areas are not always easy to decipher. Writing in the capacity of a Roman admiral, he introduces the northern region by declaring to his Roman readers that there are 23 islands "Romanis armis cognitae" ("known to Roman arms") in this area. According to Pliny, the "clarissima" ("most famous") of the region's islands is Scatinavia, of unknown size. There live the Hilleviones. The belief that Scandinavia was an island became widespread among classical authors during the 1st century and dominated descriptions of Scandinavia in classical texts during the centuries that followed. Pliny begins his description of the route to Scatinavia by referring to the mountain of Saevo (mons Saevo ibi), the Codanus Bay ("Codanus sinus") and the Cimbrian promontory. The geographical features have been identified in various ways. By some scholars, Saevo is thought to be the mountainous Norwegian coast at the entrance to Skagerrak and the Cimbrian peninsula is thought to be Skagen, the north tip of Jutland, Denmark. As described, Saevo and Scatinavia can also be the same place. Pliny mentions Scandinavia one more time: in Book VIII he says that the animal called achlis (given in the accusative, achlin, which is not Latin) was born on the island of Scandinavia. The animal grazes, has a big upper lip and some mythical attributes. The name Scandia, later used as a synonym for Scandinavia, also appears in Pliny's Naturalis Historia (Natural History), but is used for a group of Northern European islands which he locates north of Britannia. Scandia thus does not appear to be denoting the island Scadinavia in Pliny's text. The idea that Scadinavia may have been one of the Scandiae islands was instead introduced by Ptolemy (c. 90 – c. 168 AD), a mathematician, geographer and astrologer of Roman Egypt. He used the name Skandia for the biggest, most easterly of the three Scandiai islands, which according to him were all located east of Jutland. The Viking age in Scandinavia lasted from approximately 793–1066 AD and saw Scandinavians participate in large scale raiding, colonization, conquest and trading throughout Europe and beyond. The period saw a big expansion of Scandinavian-conquered territory and of exploration. Utilizing their advanced longships, they reached as far as North America, being the first Europeans to do so. 
During this time Scandinavians were drawn to wealthy towns, monasteries and petty kingdoms overseas in places such as the British Isles, Ireland, the Baltic coast and Normandy, all of which made profitable targets for raids. Scandinavians, primarily from modern-day Sweden and known as Varangians, also ventured east into what is now Russia, raiding along river trade routes. During this period, unification also took place between different Scandinavian kingdoms, culminating in the North Sea Empire, which at its peak included large parts of Scandinavia and Great Britain. This expansion and conquest led to the formation of several kingdoms, earldoms and settlements throughout Europe such as the Kingdom of the Isles, Earldom of Orkney, Scandinavian York, Danelaw, Kingdom of Dublin, the Duchy of Normandy and the Kievan Rus'. The Faroe Islands, Iceland and Greenland were also settled by the Scandinavians during this time. The Normans, Rus' people, Faroe Islanders, Icelanders and Norse-Gaels all emerged from these Scandinavian expansions. During a period of Christianization and state formation in the 10th–13th centuries, numerous Germanic petty kingdoms and chiefdoms were unified into three kingdoms: Denmark, Sweden and Norway. According to historian Sverre Bagge, this division into three Scandinavian kingdoms makes sense geographically, as forests, mountains, and uninhabited land divided them from one another. Control of Norway was enabled through seapower, whereas control of the great lakes in Sweden enabled control of the kingdom, and control of Jutland was sufficient to control Denmark. The most contested area was the coastline from Oslo to Öresund, where the three kingdoms met. The three Scandinavian kingdoms joined in 1397 in the Kalmar Union under Queen Margaret I of Denmark. Sweden left the union in 1523 under King Gustav I of Sweden. In the aftermath of Sweden's secession from the Kalmar Union, civil war broke out in Denmark and Norway—the Protestant Reformation followed. When things had settled, the Norwegian privy council was abolished—it assembled for the last time in 1537. A personal union, entered into by the kingdoms of Denmark and Norway in 1536, lasted until 1814. Three sovereign successor states have subsequently emerged from this unequal union: Denmark, Norway and Iceland. The borders between Denmark, Norway and Sweden acquired their present shape in the middle of the 17th century: In the 1645 Treaty of Brömsebro, Denmark–Norway ceded the Norwegian provinces of Jämtland, Härjedalen and Idre and Särna, as well as the Baltic Sea islands of Gotland and Ösel (in Estonia) to Sweden. The Treaty of Roskilde, signed in 1658, forced Denmark–Norway to cede the Danish provinces Scania, Blekinge, Halland, Bornholm and the Norwegian provinces of Båhuslen and Trøndelag to Sweden. The 1660 Treaty of Copenhagen forced Sweden to return Bornholm and Trøndelag to Denmark–Norway, and to give up its recent claims to the island of Funen. In the east, Finland was a fully incorporated part of Sweden from medieval times until the Napoleonic wars, when it was ceded to Russia. Despite many wars over the years since the formation of the three kingdoms, Scandinavia has been politically and culturally close. Denmark–Norway as a historiographical name refers to the former political union consisting of the kingdoms of Denmark and Norway, including the Norwegian dependencies of Iceland, Greenland and the Faroe Islands. The corresponding adjective and demonym is Dano-Norwegian.
During Danish rule, Norway kept its separate laws, coinage and army as well as some institutions such as a royal chancellor. Norway's old royal line had died out with the death of Olav IV in 1387, but Norway's remaining a hereditary kingdom became an important factor for the Oldenburg dynasty of Denmark–Norway in its struggles to win elections as kings of Denmark. The Treaty of Kiel (14 January 1814) formally dissolved the Dano-Norwegian union and ceded the territory of Norway proper to the King of Sweden, but Denmark retained Norway's overseas possessions. However, widespread Norwegian resistance to the prospect of a union with Sweden induced the governor of Norway, crown prince Christian Frederik (later Christian VIII of Denmark), to call a constituent assembly at Eidsvoll in April 1814. The assembly drew up a liberal constitution and elected Christian Frederik to the throne of Norway. Following a Swedish invasion during the summer, the peace conditions of the Convention of Moss (14 August 1814) specified that King Christian Frederik had to resign, but Norway would keep its independence and its constitution within a personal union with Sweden. Christian Frederik formally abdicated on 10 October 1814 and returned to Denmark. The Norwegian parliament, the Storting, elected King Charles XIII of Sweden as king of Norway on 4 November. The Storting dissolved the union between Sweden and Norway in 1905, after which the Norwegians elected Prince Charles of Denmark as king of Norway: he reigned as Haakon VII. Economy Measured in per capita GDP, the Nordic countries are among the richest in the world. There is a generous welfare system in Denmark, Finland, Iceland, Norway and Sweden. These economies have been marked by large public sectors, extensive and generous welfare systems, a high level of taxation and considerable state involvement. Various promotional agencies of the Nordic countries, such as the Norwegian Trekking Association and the Swedish Tourist Association, as well as the American-Scandinavian Foundation in the United States (established in 1910 by the Danish American industrialist Niels Poulsen), serve to promote market and tourism interests in the region. Today, the five Nordic heads of state act as the foundation's patrons, and according to its official statement its mission is "to promote the Nordic region as a whole while increasing the visibility of Denmark, Finland, Iceland, Norway and Sweden in New York City and the United States". The official tourist boards of Scandinavia sometimes cooperate under one umbrella, such as the Scandinavian Tourist Board. The cooperation was introduced for the Asian market in 1986, when the Swedish national tourist board joined the Danish national tourist board to coordinate intergovernmental promotion of the two countries. Norway's government joined one year later. All five Nordic governments participate in the joint promotional efforts in the United States through the Scandinavian Tourist Board of North America.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Stephen_Moylan] | [TOKENS: 864]
Contents Stephen Moylan Stephen Moylan (1737 – April 11, 1811) was an Irish-American patriot leader during the American Revolutionary War. He had several positions in the Continental Army, including Muster-Master General, Secretary and Aide to General George Washington, 2nd Quartermaster General, Commander of the Fourth Continental Light Dragoons, and Commander of the Cavalry of the Continental Army. In January 1776, he wrote a letter using the term "United States of America", the earliest known use of that phrase. Early life Moylan was born to a Catholic family in Cork, Kingdom of Ireland, in 1737. His father, John Moylan, was a well-to-do merchant of Shandon. Stephen's older brother Francis became Bishop of Cork. His family sent him to be educated in Paris. Moylan then worked in Lisbon for three years in the family shipping firm. He settled in Philadelphia in 1768 to organize his own firm. He was one of the organizers of the Friendly Sons of St. Patrick, an Irish-American fraternal organization, and served as its first president. American Revolution Moylan joined the American Continental Army in 1775 and, upon the recommendation of John Dickinson, was appointed Muster-Master General on August 11, 1775. His brother John acted during the war as United States Clothier General. Stephen Moylan's experience in the shipping industry afforded the United States a well-qualified ship outfitter, who would help fit out the first ships of the Continental Navy. On March 5, 1776, he became secretary to General George Washington with the rank of lieutenant colonel. He was appointed Quartermaster General in the American Continental Army on June 5, 1776, succeeding Thomas Mifflin. He resigned from this office on September 28, 1776. However, he continued to serve as a volunteer on General Washington's staff through December 1776. He then raised a troop of light dragoons, the 4th Continental Light Dragoons, also known as Moylan's Horse, on January 3, 1777, at Philadelphia. The regiment would be noted for taking the field in captured British uniforms. They engaged in military action at the Battle of Brandywine on September 11, 1777, and then at the Battle of Germantown on October 4, 1777. By the end of 1777, they were engaged in defending the cantonment at Valley Forge. Col. Moylan succeeded General Pulaski as Commander of the Cavalry in March 1778. Moylan's Horse would see action at the Battle of Monmouth on June 28, 1778. In the campaign of 1779, Moylan and the 4th Dragoons were stationed at Pound Ridge, New York, and saw military action at the Battle of Norwalk on July 11, 1779. Col. Moylan and the 4th Dragoons took part in the Battle of Springfield in New Jersey, on June 23, 1780, and General Anthony Wayne's expedition at Bull's Ferry, New Jersey, on July 20, 1780. Col. Moylan commanded his Dragoons at the Siege of Yorktown in October 1781, after which he was to take the cavalry to the Southern Campaign. However, his failing health caused him to leave the field and return to Philadelphia, where he constantly appealed to the Continental Congress to man, equip and maintain the Continental Dragoon Regiments. He was rewarded for his service by being breveted to brigadier general on November 3, 1783. Personal life Moylan married Mary Ricketts Van Horne on September 12, 1778, and they had two daughters, Elizabeth Catherine and Maria. Their two sons died as children. Moylan died on April 11, 1811, in Philadelphia, and is buried there in St. Mary's Churchyard.
This article incorporates text from a publication now in the public domain: Herbermann, Charles, ed. (1913). "Stephen Moylan". Catholic Encyclopedia. New York: Robert Appleton Company.
========================================
[SOURCE: https://en.wikipedia.org/wiki/The_Washington_Post] | [TOKENS: 8181]
Contents The Washington Post The Washington Post (locally known as The Post and, informally, WaPo or WP) is an American daily newspaper published in Washington, D.C. It is the most widely circulated newspaper in the Washington metropolitan area and is considered a newspaper of record in the United States. In 2023, the Post had 130,000 print subscribers and 2.5 million digital subscribers, both ranking third among American newspapers after The New York Times and The Wall Street Journal. In 2025, the number of print subscribers sank below 100,000 for the first time in 55 years. The Post was founded in 1877. In its early years, it went through several owners and struggled financially and editorially. In 1933, financier Eugene Meyer purchased it out of bankruptcy and revived its health and reputation; his successors Katharine and Phil Graham, Meyer's daughter and son-in-law, bought out several rival publications and continued this work. The Post's 1971 printing of the Pentagon Papers helped spur opposition to the Vietnam War. Reporters Bob Woodward and Carl Bernstein led the investigation into the break-in at the Democratic National Committee, which developed into the Watergate scandal and the 1974 resignation of President Richard Nixon. In October 2013, the Graham family sold the newspaper to Nash Holdings, a holding company owned by Jeff Bezos, for US$250 million. The newspaper has won 76 Pulitzer Prizes, second only to The New York Times. Washington Post journalists have received 18 Nieman Fellowships and 368 White House News Photographers Association awards. Well-known for its political reporting in the U.S., it is one of the few American newspapers that still operate foreign bureaus, with international breaking news hubs in London and Seoul. Bureaus and circulation As of 2021, the newspaper had 21 foreign bureaus: Baghdad, Beijing, Beirut, Berlin, Brussels, Cairo, Dakar, Hong Kong, Islamabad, Istanbul, Jerusalem, London, Mexico City, Moscow, Nairobi, New Delhi, Rio de Janeiro, Rome, Seoul, Tokyo, and Toronto. The newspaper has local bureaus in Maryland (Annapolis, Montgomery County, Prince George's County, and Southern Maryland) and Virginia (Alexandria, Fairfax, Loudoun County, Richmond, and Prince William County). In 2009, the newspaper closed three U.S. regional bureaus—in Chicago, Los Angeles, and New York City—as part of an increased focus on Washington, D.C.–based political stories and local news. The Washington Post does not print an edition for distribution outside the East Coast. In 2009, the newspaper ceased publication of its National Weekly Edition due to shrinking circulation. The majority of its newsprint readership is in Washington, D.C., and its suburbs in Maryland and Northern Virginia. As of March 2023, the Post's average printed weekday circulation was 139,232, making it the third-largest newspaper in the country by circulation. For many decades, the Post had its main office at 1150 15th Street NW. This real estate remained with Graham Holdings when the newspaper was sold to Jeff Bezos' Nash Holdings in 2013. Graham Holdings sold 1150 15th Street, along with 1515 L Street, 1523 L Street, and land beneath 1100 15th Street, for $159 million in November 2013. The Post continued to lease space at 1150 L Street NW. In May 2014, The Post leased the west tower of One Franklin Square, a high-rise building at 1301 K Street NW in Washington, D.C. The Post has its own ZIP Code, 20071. 
History The newspaper was founded in 1877 by Stilson Hutchins (1838–1912); in 1880, it added a Sunday edition, becoming the city's first newspaper to publish seven days a week. In April 1878, about four months into publication, The Washington Post purchased The Washington Union, a competing newspaper which was founded by John Lynch in late 1877. The Union had only been in operation about six months at the time of the acquisition. The combined newspaper was published from the Globe Building as The Washington Post and Union beginning on April 15, 1878, with a circulation of 13,000. The Post and Union name was used about two weeks until April 29, 1878, returning to the original masthead the following day. In 1889, Hutchins sold the newspaper to Frank Hatton, a former Postmaster General, and Beriah Wilkins, a former Democratic congressman from Ohio. To promote the newspaper, the new owners requested the leader of the United States Marine Band, John Philip Sousa, to compose a march for the newspaper's essay contest awards ceremony. Sousa composed "The Washington Post". It became the standard music to accompany the two-step, a late 19th-century dance craze, and remains one of Sousa's best-known works. In 1893, the newspaper moved to a building at 14th and E streets NW, where it would stay until 1950. This building combined all functions of the newspaper into one headquarters – newsroom, advertising, typesetting, and printing – that ran 24 hours per day. In 1898, during the Spanish–American War, the Post printed Clifford K. Berryman's classic illustration Remember the Maine, which became the battle-cry for American sailors during the War. In 1902, Berryman published another famous cartoon in the Post – Drawing the Line in Mississippi. This cartoon depicts President Theodore Roosevelt showing compassion for a small bear cub and inspired New York store owner Morris Michtom to create the teddy bear. Wilkins acquired Hatton's share of the newspaper in 1894 at Hatton's death. After Wilkins died in 1903, his sons John and Robert ran the Post for two years before selling it in 1905 to John Roll McLean, owner of the Cincinnati Enquirer. During the Wilson presidency, the Post was credited with the "most famous newspaper typo" in D.C. history according to Reason magazine; the Post intended to report that President Wilson had been "entertaining" his future-wife Mrs. Galt, but instead wrote that he had been "entering" Mrs. Galt. When McLean died in 1916, he put the newspaper in a trust, having little faith that his playboy son Edward "Ned" McLean could manage it as part of his inheritance. Ned went to court and broke the trust, but, under his management, the newspaper slumped toward ruin. He bled the paper for his lavish lifestyle and used it to promote political agendas. During the Red Summer of 1919 the Post supported the white mobs and even ran a front-page story which advertised the location at which white servicemen were planning to meet to carry out attacks on black Washingtonians. In 1929, financier Eugene Meyer, who had run the War Finance Corp. since World War I, secretly made an offer of $5 million for the Post, but he was rebuffed by Ned McLean. On June 1, 1933, Meyer bought the paper at a bankruptcy auction for $825,000 three weeks after stepping down as Chairman of the Federal Reserve. He had bid anonymously, and was prepared to go up to $2 million, far higher than the other bidders. 
These included William Randolph Hearst, who had long hoped to shut down the ailing Post to benefit his own Washington newspaper presence. The Post's health and reputation were restored under Meyer's ownership. In 1946, he was succeeded as publisher by his son-in-law, Philip Graham. Meyer eventually gained the last laugh over Hearst, who had owned the old Washington Times and the Herald before their 1939 merger that formed the Times-Herald. This was, in turn, bought by and merged into the Post in 1954. The combined paper was officially named The Washington Post and Times-Herald until 1973, although the Times-Herald portion of the nameplate became less and less prominent over time. The merger left the Post with two remaining local competitors, the Washington Star (Evening Star) and The Washington Daily News. In 1972, the two competitors merged, forming the Washington Star-News. After Graham died in 1963, control of The Washington Post Company passed to his wife, Katharine Graham (1917–2001), who was also Eugene Meyer's daughter. Few women had run prominent national newspapers in the United States, and Katharine Graham said that she was particularly anxious about assuming this role. She served as publisher from 1969 to 1979. Graham took The Washington Post Company public on June 15, 1971, in the midst of the Pentagon Papers controversy. A total of 1,294,000 shares were offered to the public at $26 per share. By the end of Graham's tenure as CEO in 1991, the stock was worth $888 per share, not counting the effect of an intermediate 4:1 stock split. Graham also oversaw the Post company's diversification purchase of the for-profit education and training company Kaplan, Inc. for $40 million in 1984. Twenty years later, Kaplan had surpassed the Post newspaper as the company's leading contributor to income, and by 2010 Kaplan accounted for more than 60% of the entire company revenue stream. Executive editor Ben Bradlee put the newspaper's reputation and resources behind reporters Bob Woodward and Carl Bernstein, who, in a long series of articles, chipped away at the story behind the 1972 burglary of Democratic National Committee offices in the Watergate complex in Washington. The Post's dogged coverage of the story, the outcome of which ultimately played a major role in the resignation of President Richard Nixon, won the newspaper a Pulitzer Prize in 1973. In 1972, the "Book World" section was introduced with Pulitzer Prize-winning critic William McPherson as its first editor. It featured Pulitzer Prize-winning critics such as Jonathan Yardley and Michael Dirda, the latter of whom established his career as a critic at the Post. In 2009, after 37 years, with great reader outcries and protest, The Washington Post Book World as a standalone insert was discontinued, the last issue being Sunday, February 15, 2009, along with a general reorganization of the paper, such as placing the Sunday editorials on the back page of the main front section rather than the "Outlook" section and distributing some other locally oriented "op-ed" letters and commentaries in other sections. However, book reviews are still published in the Outlook section on Sundays and in the Style section the rest of the week, as well as online. Donald E. Graham, Katharine's son, succeeded her as a publisher in 1979. In 1995, the domain name washingtonpost.com was purchased. That same year, a failed effort to create an online news repository called Digital Ink launched. 
The following year, it was shut down, and the first website was launched in June 1996. In August 2013, Jeff Bezos purchased The Washington Post and other local publications, websites, and real estate for US$250 million, transferring ownership to Nash Holdings LLC, Bezos's private investment company. The paper's former parent company, which retained some other assets such as Kaplan and a group of TV stations, was renamed Graham Holdings shortly after the sale. Nash Holdings, which includes the Post, is operated separately from technology company Amazon, which Bezos founded and where he is as of 2022[update] executive chairman and the largest single shareholder, with 12.7% of voting rights. Bezos said he has a vision that recreates "the 'daily ritual' of reading the Post as a bundle, not merely a series of individual stories..." He has been described as a "hands-off owner", holding teleconference calls with executive editor Martin Baron every two weeks. Bezos appointed Fred Ryan (founder and CEO of Politico) to serve as publisher and chief executive officer. This signaled Bezos' intent to shift the Post to a more digital focus with a national and global readership. In 2015, the Post moved from the building it owned at 1150 15th Street to a leased space three blocks away at One Franklin Square on K Street. Since 2014 the Post has launched an online personal finance section, a blog, and a podcast with a retro theme. The Post won the 2020 Webby People's Voice Award for News & Politics in the Social and Web categories. In 2017, the newspaper hired Jamal Khashoggi as a columnist. In 2018, Khashoggi was murdered by Saudi agents in Istanbul. In October 2023, the Post announced it would cut 240 jobs across the organization by offering voluntary separation packages to employees. In a staff-wide email announcing the job cuts, interim CEO Patty Stonesifer wrote, "Our prior projections for traffic, subscriptions and advertising growth for the past two years — and into 2024 — have been overly optimistic". The Post has lost around 500,000 subscribers since the end of 2020 and was set to lose $100 million in 2023, according to The New York Times. The layoffs prompted Dan Froomkin of Presswatchers to suggest that the decline in readership could be reversed by focusing on the rise of authoritarianism (in a fashion similar to the role the Post played during the Watergate scandal) instead of staying strictly neutral, which Froomkin says places the paper into an undistinguished secondary role in competition with other contemporary media. As part of the shift in tone, in 2023 the paper closed down the "KidsPost" column for children, the "Skywatch" astronomy column, and the "John Kelly's Washington" column about local history and sights, which had been running under different bylines since 1947. In May 2024, CEO and publisher William Lewis announced that the organization would embrace artificial intelligence to improve the paper's financial situation, telling staff it would seek "AI everywhere in our newsroom." In June 2024, Axios reported the Post faced significant internal turmoil and financial challenges. The new CEO, Lewis, has already generated controversy with his leadership style and proposed restructuring plans. 
The abrupt departure of executive editor Sally Buzbee and the appointment of two white men to top editorial positions sparked internal discontent, particularly given the lack of consideration for the Post's senior female editors, as well as allegations that in March 2024, Lewis put pressure on Buzbee to bury a story about his involvement in a British phone-hacking scandal. Additionally, Lewis' proposed division for social media and service journalism met with resistance from staff. Reports alleging Lewis' attempts to influence editorial decisions, including pressuring Buzbee to drop a story about his past ties to a phone hacking scandal, and offering NPR's media correspondent an exclusive interview about the Post's future in exchange for not publishing similar allegations, further shook the newsroom's morale. Staffers also became worried about Lewis' drinking and uninvolved role in the newsroom. Lewis continued to grapple with declining revenue and audience on the business front, and sought strategies to regain subscribers lost since the Trump era. Later that month, the paper ran a story allegedly exposing a connection between incoming editor Robert Winnett and John Ford, a man who "admitted to an extensive career using deception and illegal means to obtain confidential information." Winnett withdrew from the position shortly thereafter. In January 2025, the Post announced it would lay off 4% of its staff, fewer than 100 people; newsroom employees would not be affected. On January 14, 2026, the FBI raided the apartment of a Post journalist, Hannah Natanson, and seized her phone, two laptops, and a smartwatch. Investigators told Natanson that the focus of the probe was not her but Aurelio Perez-Lugones, a system administrator with top-secret security clearance, under investigation for taking home classified intelligence reports. The day after, the Post's editorial board called the search an "aggressive attack on the press freedom of all journalists." On February 4, 2026, it was announced that around 300 Post employees would be laid off. The paper's sports and books coverage were expected to be closed entirely, and its local news coverage substantially cut. Additionally, its daily news podcast "Post Reports", which ran for seven years, was suspended. Several foreign bureaus were closed, and at least one correspondent in Ukraine was laid off. The layoffs were driven by a reported $100 million in losses in 2024, subscriber drops following the paper's refusal to endorse a presidential candidate in the 2024 US election, and falling search traffic from AI tools. On February 7, 2026, it was announced that Will Lewis, the paper's publisher, would step down and be replaced in the interim by Jeff D'Onofrio, who served as the company's chief financial officer. The Washington Post Guild employees union welcomed the change in leadership, stating that Lewis would be remembered for "the attempted destruction of a great American journalism institution", and urged Bezos to "sell the paper to someone willing to invest in its future". In January 2025, editorial cartoonist Ann Telnaes resigned from The Washington Post. In a blog post titled "Why I'm quitting the Washington Post", she said the paper refused to run a cartoon that criticized the relationship between American billionaires and President Donald Trump, a decision she called "dangerous for a free press". The post and cartoon sparked conversations about the paper's ownership under Bezos.
In February 2025, Bezos announced that the opinion section of the Post would publish only pieces that support "personal liberties and free markets". David Shipley, The Post's opinion editor, resigned after trying to persuade Bezos to reconsider the new direction. Within two days of the announcement, it was reported that more than 75,000 digital subscribers had canceled their subscriptions. The following month, publisher Will Lewis killed a column by Opinion columnist and editor Ruth Marcus criticizing the new direction. Marcus resigned, ending her 40-year tenure with the newspaper. In the aftermath of the killing of Charlie Kirk, the Post fired columnist and founding global opinion editor Karen Attiah in September 2025, citing violations of its social media policy. According to Attiah, she was punished for "speaking out against political violence, racial double standards, and America's apathy toward guns", arguing the US "accepts and worships" gun violence. Politico noted that only one post mentioned Kirk, which referenced his prior claim that Black women "do not have the brain processing power" to be taken seriously. Attiah said her posts "made clear that not performing over-the-top grief for white men who espouse violence was not the same as endorsing violence against them", and described being pushed out after 11 years of service "for doing my job as a journalist" as a deeply "cruel 180". She was the last Black full-time writer at the opinion desk. Political positions In 1933, financier Eugene Meyer bought the bankrupt Post, and assured the public that neither he nor the newspaper would be beholden to any political party. But as a leading Republican who had been appointed Chairman of the Federal Reserve by Herbert Hoover in 1930, his opposition to Roosevelt's New Deal colored the paper's editorials and news coverage, including editorializing news stories written by Meyer under a pseudonym. His wife Agnes Ernst Meyer was a journalist from the other end of the spectrum politically. The Post ran many of her pieces including tributes to her personal friends John Dewey and Saul Alinsky. In 1946, Meyer was appointed head of World Bank, and he named his son-in-law Phil Graham to succeed him as Post publisher. The post-war years saw the developing friendship of Phil and Kay Graham with the Kennedys, the Bradlees and the rest of the "Georgetown Set", including many Harvard University alumni that would color the Post's political orientation. Kay Graham's most memorable Georgetown soirée guest list included British diplomat and communist spy Donald Maclean. The Post is credited with coining the term "McCarthyism" in a 1950 editorial cartoon by Herbert Block. Depicting buckets of tar, it made fun of Sen. Joseph McCarthy's "tarring" tactics, i.e., smear campaigns and character assassination against those targeted by his accusations. Sen. McCarthy was attempting to do for the Senate what the House Un-American Activities Committee had been doing for years—investigating Soviet espionage in America. The HUAC made Richard Nixon nationally known for his role in the Hiss/Chambers case that exposed communist spying in the State Department. The committee had evolved from the McCormack-Dickstein Committee of the 1930s. Phil Graham's friendship with John F. Kennedy remained strong until they died in 1963. FBI Director J. Edgar Hoover reportedly told the new President Lyndon B. Johnson, "I don't have much influence with the Post because I frankly don't read it. I view it like the Daily Worker." 
Ben Bradlee became the editor-in-chief in 1968, and Kay Graham officially became the publisher in 1969, paving the way for the aggressive reporting of the Pentagon Papers and Watergate scandals. The Post strengthened public opposition to the Vietnam War in 1971 when it published the Pentagon Papers. In the mid-1970s, some conservatives referred to the Post as "Pravda on the Potomac" because of its perceived left-wing bias in both reporting and editorials. Since then, the appellation has been used by both liberal and conservative critics of the newspaper. In the PBS documentary Buying the War, journalist Bill Moyers said in the year before the Iraq War there were 27 editorials supporting the Bush administration's desire to invade Iraq. National security correspondent Walter Pincus reported that he had been ordered to cease his reports that were critical of the administration. According to author and journalist Greg Mitchell: "By the Post's own admission, in the months before the war, it ran more than 140 stories on its front page promoting the war, while contrary information got lost". On March 23, 2007, Chris Matthews said on his television program, "The Washington Post is not the liberal newspaper it was [...] I have been reading it for years and it is a neocon newspaper". It has regularly published a mixture of op-ed columnists, with some of them left-leaning (including E. J. Dionne, Dana Milbank, Greg Sargent, and Eugene Robinson), and some of them right-leaning (including George Will, Marc Thiessen, Michael Gerson and Charles Krauthammer). Responding to criticism of the newspaper's coverage during the run-up to the 2008 presidential election, former Post ombudsman Deborah Howell wrote: "The opinion pages have strong conservative voices; the editorial board includes centrists and conservatives; and there were editorials critical of Obama. Yet opinion was still weighted toward Obama." According to a 2009 Oxford University Press book by Richard Davis on the impact of blogs on American politics, liberal bloggers link to The Washington Post and The New York Times more often than other major newspapers; however, conservative bloggers also link predominantly to liberal newspapers. Since 2011, the Post has been running a column called "The Fact Checker" that the Post describes as a "truth squad". The Fact Checker received a $250,000 grant from Google News Initiative/YouTube to expand production of video fact checks. In mid-September 2016, Matthew Ingram of Forbes joined Glenn Greenwald of The Intercept, and Trevor Timm of The Guardian in criticizing The Washington Post for "demanding that [former National Security Agency contractor Edward] Snowden ... stand trial on espionage charges". In February 2017, the Post adopted the slogan "Democracy Dies in Darkness" for its masthead. In February 2025, Jeff Bezos announced that the paper's opinion pages would endorse "personal liberties and free markets" to the exclusion of other views. According to the NPR, the announcement suggested the Post was adopting a libertarian line. In October 2025, columnist Marc Thiessen stated that the paper's opinion section was now conservative. In the vast majority of U.S. elections, for federal, state, and local office, the Post editorial board has endorsed Democratic candidates. The paper's editorial board and endorsement decision-making are separate from newsroom operations. Until 1976, the Post did not regularly make endorsements in presidential elections. 
Since it endorsed Jimmy Carter in 1976, the Post has endorsed Democrats in presidential elections, and has never endorsed a Republican for president in the general election, although in the 1988 presidential election, the Post declined to endorse either Governor Michael Dukakis (the Democratic candidate) or Vice President George H. W. Bush (the Republican candidate). The Post editorial board endorsed Barack Obama in 2008 and 2012; Hillary Clinton in 2016; and Joe Biden in 2020. In 2024, the Post controversially announced that it would no longer publish presidential endorsements. While the newspaper predominantly endorses Democrats in congressional, state, and local elections, it has occasionally endorsed Republican candidates. It endorsed Maryland Governor Robert Ehrlich's unsuccessful bid for a second term in 2006. In 2006, it repeated its historic endorsements of every Republican incumbent for Congress in Northern Virginia. The Post editorial board endorsed Virginia's Republican U.S. Senator John Warner in his Senate reelection campaigns in 1990, 1996 and 2002; the paper's most recent endorsement of a Maryland Republican for U.S. Senate was in the 1980s, when the paper endorsed Senator Charles "Mac" Mathias Jr. In U.S. House of Representatives elections, moderate Republicans in Virginia and Maryland, including Wayne Gilchrest, Thomas M. Davis, and Frank Wolf, have enjoyed the support of the Post; the Post also endorsed Republican Carol Schwartz in her campaign in Washington, D.C. Eleven days before the 2024 presidential election, CEO and publisher William Lewis announced that the Post would not endorse a candidate for 2024. It was the first time since the 1988 presidential election that the paper did not endorse the Democratic candidate. Lewis also said that the paper would not make endorsements in any future presidential election. Lewis stated that the paper was "returning to our roots" of not endorsing candidates, and explained that the move was "a statement in support of our readers' ability to make up their own minds", and "consistent with the values the Post has always stood for and what we hope for in a leader: character and courage in service to the American ethic, veneration for the rule of law, and respect for human freedom in all its aspects." Sources familiar with the situation stated that the Post editorial board had drafted an endorsement for Kamala Harris, but that it had been blocked by order of the Post's owner Jeff Bezos. The move was criticized by former executive editor Martin Baron, who considered it "disturbing spinelessness at an institution famed for courage", and suggested that Bezos feared retaliation from 2024 Republican candidate Donald Trump that could impact Bezos's other businesses if Trump were elected. Editor-at-large Robert Kagan and columnist Michele Norris resigned in the wake of the decision, and editor David Maraniss said that the paper was "dying in darkness", a reference to the paper's current slogan. Post opinion columnists jointly authored an article calling the decision not to endorse a "terrible mistake", and it was condemned by the Washington Post Guild, a union unit representing Post employees. More than 250,000 people (about ten percent of the Post's subscribers) cancelled their subscriptions, and three members of the editorial board left the board, though they remained with the Post in other positions.
An endorsement of Harris was subsequently published by the paper's humorist Alexandra Petri, who explained that "if I were the paper, I would be a little embarrassed that it has fallen to me, the humor columnist, to make our presidential endorsement", and that "I only know what's happening because our actual journalists are out there reporting, knowing that their editors have their backs, that there's no one too powerful to report on, that we would never pull a punch out of fear." Condemning the Post's decision, several columnists, including Will Bunch, Jonathan Last, Dan Froomkin, Donna Ladd and Sewell Chan, described it as an example of what historian Timothy Snyder calls anticipatory obedience. Snyder himself criticized the decision, asserting that "'do not obey in advance' is the main lesson of the twentieth century." Andrew Koppelman, in an opinion piece for The Hill, sarcastically praised the Post for "show[ing] us a glimpse of the authoritarian dystopia that Trump wants". Incidents and concerns In September 1980, a Sunday feature story appeared on the front page of the Post titled "Jimmy's World" in which reporter Janet Cooke wrote a profile of the life of an eight-year-old heroin addict. Although some within the Post doubted the story's veracity, the paper's editors defended it, and assistant managing editor Bob Woodward submitted the story to the Pulitzer Prize Board at Columbia University for consideration. Cooke was awarded the Pulitzer Prize for Feature Writing on April 13, 1981. The story was subsequently found to be a complete fabrication, and the Pulitzer was returned. In July 2009, amid an intense debate over health care reform, Politico reported that a health-care lobbyist had received an "astonishing" offer of access to the Post's "health-care reporting and editorial staff". Post publisher Katharine Weymouth had planned a series of exclusive dinner parties or "salons" at her private residence, to which she had invited prominent lobbyists, trade group members, politicians, and business people. Participants were to be charged $25,000 to sponsor a single salon, and $250,000 for 11 sessions, which were closed to the public and to the non-Post press. The idea drew swift criticism as an apparent ploy to allow insiders to purchase face time with Post staff. Weymouth quickly canceled the salons, saying, "This should never have happened." White House counsel Gregory B. Craig reminded officials that under federal ethics rules, they need advance approval for such events. Post Executive Editor Marcus Brauchli, who was named on the flier as one of the salon's "Hosts and Discussion Leaders", said he was "appalled" by the plan, adding, "It suggests that access to Washington Post journalists was available for purchase." Dating back to 2011, The Washington Post began to include "China Watch" advertising supplements provided by China Daily, an English language newspaper owned by the Publicity Department of the Chinese Communist Party, on the print and online editions. Although the header to the online "China Watch" section included the text "A Paid Supplement to The Washington Post", James Fallows of The Atlantic suggested that the notice was not clear enough for most readers to see. Distributed to the Post and multiple newspapers around the world, the "China Watch" advertising supplements range from four to eight pages and appear at least monthly. According to a 2018 report by The Guardian, "China Watch" uses "a didactic, old-school approach to propaganda." 
In 2020, a report by Freedom House, titled "Beijing's Global Megaphone", criticized the Post and other newspapers for distributing "China Watch". In the same year, 35 Republican members of the U.S. Congress wrote a letter to the U.S. Department of Justice in February 2020 calling for an investigation of potential FARA violations by China Daily. The letter named an article that appeared in the Post, "Education Flaws Linked to Hong Kong Unrest", as an example of "articles [that] serve as cover for China's atrocities, including ... its support for the crackdown in Hong Kong." According to The Guardian, the Post had already stopped running "China Watch" in 2019. In 2020, The Post suspended reporter Felicia Sonmez after she posted a series of tweets about the 2003 rape allegation against basketball star Kobe Bryant after Bryant's death. She was reinstated after over 200 Post journalists wrote an open letter criticizing the paper's decision. In July 2021, Sonmez sued The Post and several of its top editors, alleging workplace discrimination; the suit was dismissed in March 2022, with the court determining that Sonmez had failed to make plausible claims. In June 2022, Sonmez engaged in a Twitter feud with fellow Post staffers David Weigel, criticizing him over what he later described as "an offensive joke", and Jose A. Del Real, who accused Sonmez of "engaging in repeated and targeted public harassment of a colleague". Following the feud, the newspaper suspended Weigel for a month for violating the company's social media guidelines, and the newspaper's executive editor Sally Buzbee sent out a newsroom-wide memorandum directing employees to "Be constructive and collegial" in their interactions with colleagues. The newspaper fired Sonmez, writing in an emailed termination letter that she had engaged in "misconduct that includes insubordination, maligning your co-workers online and violating The Post's standards on workplace collegiality and inclusivity." The Post faced criticism from the Post Guild after refusing to go to arbitration over the dismissal, stating that the expiration of the Post's contract "does not relieve the Post from its contractual obligation to arbitrate grievances filed prior to expiration." In 2019, Covington Catholic High School student Nick Sandmann filed a defamation lawsuit against the Post, alleging that it libeled him in seven articles regarding the January 2019 Lincoln Memorial confrontation between Covington students and the Indigenous Peoples March. A federal judge dismissed the case, ruling that 30 of the 33 statements in the Post that Sandmann alleged were libelous were not, but allowed Sandmann to file an amended complaint as to three statements. After Sandmann's lawyers amended the complaint, the suit was reopened on October 28, 2019. In 2020, The Post settled the lawsuit brought by Sandmann for an undisclosed amount. Several Washington Post op-eds and columns have prompted criticism, including a number of comments on race by columnist Richard Cohen over the years, and a controversial 2014 column on campus sexual assault by George Will. The Post's decision to run an op-ed by Mohammed Ali al-Houthi, a leader in Yemen's Houthi movement, was criticized by some activists on the basis that it provided a platform to an "anti-Western and antisemitic group supported by Iran." 
In 2022, actor Johnny Depp successfully sued ex-wife Amber Heard for an op-ed she wrote in The Washington Post where she described herself as a public figure representing domestic abuse two years after she had publicly accused him of domestic violence. Speaking on behalf of President Nixon, White House Press Secretary Ron Ziegler infamously accused The Washington Post of "shabby journalism" for their focus on Watergate only to apologize when the damning reporting on Nixon was proved correct. 45th/47th president Donald Trump repeatedly spoke out against The Washington Post on his Twitter account, having "tweeted or retweeted criticism of the paper, tying it to Amazon more than 20 times since his campaign for president" by August 2018. In addition to often attacking the paper itself, Trump used Twitter to blast various Post journalists and columnists. During the 2020 Democratic Party presidential primaries, Senator Bernie Sanders repeatedly criticized The Washington Post, saying that its coverage of his campaign was slanted against him and attributing this to Jeff Bezos' purchase of the newspaper. Sanders' criticism was echoed by the socialist magazine Jacobin and the progressive journalist watchdog Fairness and Accuracy in Reporting. Washington Post executive editor Martin Baron responded by saying that Sanders' criticism was "baseless and conspiratorial". An investigation by The Intercept, The Nation, and DeSmog found that The Washington Post is one of the leading media outlets that publishes advertising for the fossil fuel industry. Journalists who cover climate change for The Washington Post are concerned that conflicts of interest with the companies and industries that caused climate change and obstructed action will reduce the credibility of their reporting on climate change and cause readers to downplay the climate crisis. Organization Major stockholders Publishers Executive editors Current journalists at The Washington Post include: Yasmeen Abutaleb, Dan Balz, Will Englund, Marc Fisher, Robin Givhan, David Ignatius, Ellen Nakashima, Ashley Parker, Sally Quinn, Michelle Singletary, Ishaan Tharoor, and Joe Yonan. Former journalists of The Washington Post include: Scott Armstrong, Melissa Bell, Ann Devroy, Edward T. Folliard, Malvina Lindsay, Mary McGrory, Christine Emba, Walter Pincus, and Bob Woodward. Arc XP is a department of The Washington Post, which provides a publishing system and software for news and media organizations such as the Boston Globe, Le Parisien, The Irish Times, Libération, Dallas Morning News, The Globe and Mail, Record, Graham Media Group, and Sky News. Mary Jordan was the founding editor, head of content, and moderator for Washington Post Live, the Post's editorial events business, which organizes political debates, conferences and news events for the media company. This includes "The 40th Anniversary of Watergate" in June 2012, that featured key Watergate figures including former White House counsel John Dean, Washington Post editor Ben Bradlee, and reporters Bob Woodward and Carl Bernstein, which was held at the Watergate hotel. Regular hosts include Frances Stead Sellers. Lois Romano was formerly the editor of Washington Post Live. In 1975, the Washington Post pressmen's union went on strike. The Post hired replacement workers to replace the pressmen's union, and other unions returned to work in February 1976. 
In 1986, during negotiations between the Post and the Newspaper Guild union over a new contract, five employees, including Newspaper Guild unit chairman Thomas R. Sherwood and assistant Maryland editor Claudia Levy, sued the Post for overtime pay, stating that the newspaper had claimed that budgets did not allow for overtime wages. In June 2018, over 400 employees of The Washington Post signed an open letter to the owner Jeff Bezos demanding "fair wages; fair benefits for retirement, family leave and health care; and a fair amount of job security." The open letter was accompanied by video testimonials from employees, who alleged "shocking pay practices" despite record growth in subscriptions at the newspaper, with salaries rising an average of $10 per week, which the letter claimed was less than half the rate of inflation. The petition followed a year of unsuccessful negotiations between The Washington Post Guild and upper management over pay and benefit increases. As of 2023, the Washington Post Guild represented around 1,000 staff members at the Post. In December 2023, more than 750 journalists and staffers at the Post went on strike, accusing the company of refusing to "bargain in good faith" on issues including pay increases, pay equity, remote work policies, and mental health resources. Later the same month, the Washington Post Guild won a new three-year contract with the paper, ending 18 months of negotiations. In May 2025, a majority of technology workers at the Post voted to unionize as the Washington Post Tech Guild, representing more than 300 engineering, product design, and data workers at the Post.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-67] | [TOKENS: 4733]
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and up until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations identified a further occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners. In Stratum IVa there was a mudbrick wall with no stone foundations, along with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI-III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V-II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BC, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held in the city to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid-era Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District under Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–68. According to Qalqashandi, Lydda was an administrative centre of a wilaya in the Mamluk empire during the fourteenth and fifteenth centuries. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors, all Muslims, and 233 Christian households. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special products ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and a market toll, for a total of 45,000 akçe. All of the revenue went to the waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to the Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M.
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, under a League of Nations decree that followed the First World War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians comprised 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000: 18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al-Aref estimated 400, and Nimr al-Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in The Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. During the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics From the 19th century until the Lydda Death March, Lod was an exclusively Muslim and Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, as "Arab". Education According to CBS, the city has 38 schools with 13,188 pupils: 26 elementary schools with 8,325 pupils and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001. Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co", a subsidiary of the Strauss Group, and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed.
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger in the mosaic and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007.