[SOURCE: https://en.wikipedia.org/wiki/File_transfer] | [TOKENS: 139]
File transfer

File transfer is the transmission of a computer file through a communication channel from one computer system to another. Typically, file transfer is mediated by a communications protocol. In the history of computing, numerous file transfer protocols have been designed for different contexts.

Protocols

A file transfer protocol is a convention that describes how to transfer files between two computing endpoints. As well as the stream of bits from a file stored as a single unit in a file system, some protocols may also send relevant metadata such as the filename, file size and timestamp – and even file-system permissions and file attributes.
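As a minimal illustrative sketch of the idea above (metadata such as filename and size sent alongside the raw bytes), the toy framing below packs a small JSON header in front of a file's contents. It is not any standardized protocol; the field names and framing are invented for illustration only.

```python
# A toy, invented framing scheme (not a real protocol): a 4-byte big-endian
# header length, a JSON metadata header, then the raw file payload.
import io
import json
import struct
import time

def pack_transfer(filename: str, data: bytes) -> bytes:
    """Frame one file as: header length, JSON header (name, size, timestamp), payload."""
    header = json.dumps({
        "filename": filename,
        "size": len(data),
        "timestamp": int(time.time()),
    }).encode("utf-8")
    return struct.pack(">I", len(header)) + header + data

def unpack_transfer(stream: io.BufferedIOBase) -> tuple:
    """Read one framed file back out of a byte stream."""
    (header_len,) = struct.unpack(">I", stream.read(4))
    header = json.loads(stream.read(header_len).decode("utf-8"))
    payload = stream.read(header["size"])
    return header, payload

if __name__ == "__main__":
    frame = pack_transfer("notes.txt", b"hello, world")
    meta, body = unpack_transfer(io.BytesIO(frame))
    print(meta, body)  # {'filename': 'notes.txt', 'size': 12, ...} b'hello, world'
```

In a real protocol the same framing would be written to a network socket rather than an in-memory buffer, and permissions or attributes could be carried as additional header fields.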
========================================
[SOURCE: https://en.wikipedia.org/wiki/Serial_I/O] | [TOKENS: 927]
Serial communication

In telecommunication and data transmission, serial communication is the process of sending data one bit at a time, sequentially, over a communication channel or computer bus. This is in contrast to parallel communication, where several bits are sent as a whole, on a link with several parallel channels. Serial communication is used for all long-haul communication and most computer networks, where the cost of cable and difficulty of synchronization make parallel communication impractical. Serial computer buses have become more common even at shorter distances, as improved signal integrity and transmission speeds in newer serial technologies have begun to outweigh the parallel bus's advantage of simplicity (no need for serializer and deserializer, or SerDes) and to outstrip its disadvantages (clock skew, interconnect density). The migration from PCI to PCI Express (PCIe) is an example. Modern high-speed serial interfaces such as PCIe send data several bits at a time using modulation/encoding techniques such as PAM4, which groups 2 bits at a time into a single symbol; the symbols themselves are still sent one at a time. This replaces PAM2 or non-return-to-zero (NRZ), which sends only one bit at a time, or in other words one bit per symbol. The symbols are sent at a speed known as the symbol rate or the baud rate.

Cables

Many serial communication systems were originally designed to transfer data over relatively large distances through some sort of data cable. Practically all long-distance communication transmits data one bit at a time, rather than in parallel, because it reduces the cost of the cable. The cables that carry this data (other than "the" serial cable) and the computer ports they plug into are usually referred to with a more specific name, to reduce confusion. Keyboard and mouse cables and ports are almost invariably serial—such as the PS/2 port, Apple Desktop Bus and USB. The cables that carry digital video are also mostly serial—such as coax cable plugged into an HD-SDI port, a webcam plugged into a USB port or FireWire port, Ethernet cable connecting an IP camera to a Power over Ethernet port, FPD-Link, digital telephone lines (e.g. ISDN), etc. Other such cables and ports, transmitting data one bit at a time, include Serial ATA, Serial SCSI, Ethernet cable plugged into Ethernet ports, and the Display Data Channel using previously reserved pins of the VGA connector or the DVI port or the HDMI port.

Serial buses

Many communication systems were designed to connect two integrated circuits on the same printed circuit board, connected by signal traces on that board (rather than external cables). Integrated circuits are more expensive when they have more pins. To reduce the number of pins in a package, many ICs use a serial bus to transfer data when speed is not important. Some examples of such low-cost, lower-speed serial buses include RS-232, DALI, SPI, CAN bus, I²C, UNI/O, and 1-Wire. Higher-speed serial buses include USB, SATA and PCI Express.

Serial versus parallel

The communication links across which computers (or parts of computers) talk to one another may be either serial or parallel. A parallel link transmits several streams of data simultaneously along multiple channels (e.g., wires, printed circuit tracks, or optical fibers), whereas a serial link transmits only a single stream of data.
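As a toy illustration of the contrast just described, the sketch below pushes bytes onto a single "wire" one bit per tick and reassembles them at the other end. It is an illustrative model only, not a real bus, line code, or SerDes design.

```python
# Illustrative only: a toy model of a serial link (one bit per tick on one wire)
# versus a hypothetical parallel link that moves a whole byte per tick.
from typing import Iterator, List

def serialize(data: bytes) -> Iterator[int]:
    """Emit the bits of each byte MSB-first, one bit per tick (the serial link)."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def deserialize(bits: List[int]) -> bytes:
    """Reassemble MSB-first bits back into bytes (the receiver's SerDes role)."""
    out = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    message = b"serial"
    wire = list(serialize(message))           # 48 ticks on a single wire
    assert deserialize(wire) == message
    # An imagined 8-wire parallel link would need only len(message) ticks,
    # but eight conductors that must all stay skew-matched.
    print(len(wire), "serial ticks vs", len(message), "parallel ticks")
```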
The rationale for parallel communication was the added benefit of having Direct Memory Access to 8-bit or 16-bit register addresses at a time when mapping direct data lanes was more convenient and faster than synchronizing data serially.[citation needed] Although a serial link may seem inferior to a parallel one, since it can transmit less data per clock cycle, it is often the case that serial links can be clocked considerably faster than parallel links in order to achieve a higher data rate. Several factors allow serial to be clocked at a higher rate:
- The transition from parallel to serial buses was enabled by Moore's law, which allowed for the incorporation of SerDes in integrated circuits.
- An electrical serial link only requires a pair of wires, whereas a parallel link requires several. Thus serial links can save on costs (the bill of materials).
- Differential signalling uses length-matched wires or conductors and is used in high-speed serial links. Length-matching is easier to perform on serial links as they require fewer conductors.
In many cases, serial is cheaper to implement than parallel. Many ICs have serial interfaces, as opposed to parallel ones, so that they have fewer pins and are therefore less expensive.
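To make the clocking argument above concrete, the small calculation below relates raw line rate to symbol (baud) rate and bits per symbol, contrasting an NRZ link (one bit per symbol) with a PAM4 link (two bits per symbol). The symbol rate and the 500 MHz parallel-bus figure are illustrative assumptions, not the parameters of any particular standard.

```python
# Illustrative calculation: raw bit rate = symbol rate * bits per symbol.
# All figures below are example numbers, not any specific bus specification.

def bit_rate(symbol_rate_hz: float, bits_per_symbol: int) -> float:
    """Raw line rate in bits per second (ignoring framing/encoding overhead)."""
    return symbol_rate_hz * bits_per_symbol

if __name__ == "__main__":
    symbol_rate = 16e9                  # 16 GBd, an illustrative figure
    nrz = bit_rate(symbol_rate, 1)      # NRZ/PAM2: one bit per symbol
    pam4 = bit_rate(symbol_rate, 2)     # PAM4: two bits per symbol
    print(f"NRZ : {nrz / 1e9:.0f} Gbit/s per lane")
    print(f"PAM4: {pam4 / 1e9:.0f} Gbit/s per lane")
    # A hypothetical parallel bus clocked at 500 MHz would need
    # pam4 / 500e6 = 64 single-ended lines, all kept skew-matched,
    # to match the same throughput.
    print(f"Equivalent 500 MHz parallel width: {pam4 / 500e6:.0f} lines")
```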
========================================
[SOURCE: https://en.wikipedia.org/wiki/Chapbook] | [TOKENS: 2663]
Chapbook

A chapbook is a type of small printed booklet that was a popular medium for street literature throughout early modern Europe. Chapbooks were usually produced cheaply, illustrated with crude woodcuts and printed on a single sheet folded into 8, 12, 16, or 24 pages, sometimes bound with a saddle stitch. Printers provided chapbooks on credit to chapmen, who sold them both from door to door and at markets and fairs, then paying for the stock they sold. The tradition of chapbooks emerged during the 16th century as printed books were becoming affordable, with the medium ultimately reaching its height of popularity during the 17th and 18th centuries. Various ephemera and popular or folk literature were published as chapbooks, such as almanacs, children's literature, folklore, ballads, nursery rhymes, pamphlets, poetry, and political and religious tracts. The term chapbook remains in use by publishers to refer to short, inexpensive booklets.

Terminology

Chapbook is first attested in English in 1824, and seemingly derives from chapman, the word for the itinerant salesmen who, after printing was invented, would sell such books. The first element of chapman comes in turn from Old English cēap 'barter', 'business', 'dealing', from which the modern adjective cheap was ultimately derived. Chapbooks correspond to Portuguese Cordel literature, and to French bibliothèque bleue 'blue library' literature, so called because such books were often wrapped in cheap blue paper that was usually reserved as a wrapping for sugar. Chapbooks are called Volksbuch 'people's book' in German, and pliegos sueltos 'loose sheets' in Spanish, with the latter name referring to their method of assembly. Lubok books are the Russian equivalent.

History

Broadside ballads were popular songs, sold for a penny or halfpenny in the streets of towns and villages around Britain between the 16th and the early 20th centuries. They preceded chapbooks but had similar content, marketing, and distribution systems. There are records from Cambridgeshire as early as 1553 of a man offering a scurrilous ballad "maistres mass" at an alehouse, and of a pedlar selling "lytle books" to people, including a patcher of old clothes, in 1578. These sales are probably characteristic of the market for chapbooks. The form factor originated in Britain, but was also used in North America. Chapbooks gradually disappeared from the mid-19th century in the face of competition from cheap newspapers and, especially in Scotland, from tract societies that regarded them as ungodly. Chapbooks were generally aimed at buyers who did not maintain libraries, and due to their flimsy construction they rarely survive as individual items. In an era when paper was expensive, chapbooks were sometimes used for wrapping, baking, or as toilet paper. Many of the surviving chapbooks come from the collections of Samuel Pepys, made between 1661 and 1688, which are now held at Magdalene College, Cambridge. The antiquary Anthony Wood also collected 65 chapbooks, including 20 from before 1660, which are now in the Bodleian Library. There are also significant Scottish collections, such as those held by the University of Glasgow and the National Library of Scotland. Modern collectors, such as Peter Opie, have chiefly a scholarly interest in the form.
Modern small literary presses, such as Louffa Press, Black Lawrence Press and Ugly Duckling Presse, continue to issue several small editions of chapbooks a year, updated in technique and materials, often to high fabrication standards, such as letterpress. Production and distribution Chapbooks were cheap, anonymous publications that were the usual reading material for lower-class people who could not afford books. Members of the upper classes occasionally owned chapbooks, and sometimes bound them in leather. Printers typically tailored their texts for the popular market. Chapbooks were usually between four and twenty-four pages long, and produced on rough paper with crude, frequently recycled, woodcut illustrations. Millions of chapbooks were sold each year. After 1696, English chapbook peddlers had to be licensed, and 2,500 of them were then authorized, 500 in London alone. In France, there were 3,500 licensed colporteurs by 1848, and they sold 40 million books annually. The centre of the chapbook and ballad production was London, and printers were based around London Bridge until the Great Fire of London in 1666. Still, a feature of the emergence of chapbooks is the proliferation of provincial printers, especially in Scotland and Newcastle upon Tyne. The first Scottish publication was the tale of Tom Thumb, in 1682. Content Chapbooks were an important medium for the dissemination of popular culture to the common people, especially in rural areas. They were a medium of entertainment and information. Though the content of chapbooks has been criticized as unsophisticated narratives which were heavily loaded with repetition and emphasized adventure through mostly anecdotal structures, they are valued as a record of popular culture, preserving cultural artefacts that may not survive in any other form. Chapbooks were priced for sales to workers, although their market was not limited to the working classes. Broadside ballads were sold for a halfpenny, or a few pence. Prices of chapbooks were from 2d. to 6d., when agricultural labourers' wages were 12d. per day. The literacy rate in England in the 1640s was around 30 percent for males and rose to 60 percent in the mid-18th century. Many working people were readers, if not writers, and pre-industrial working patterns provided periods during which they could read. Chapbooks were used for reading to family groups or groups in alehouses. They contributed to the development of literacy, and there is evidence of their use by autodidacts. In the 1660s, as many as 400,000 almanacs were printed annually, enough for one family in three in England. One 17th-century publisher of chapbooks in London stocked one book for every 15 families in the country.[clarification needed] In the 1520s the Oxford bookseller John Dorne noted in his day-book selling up to 190 ballads a day at a halfpenny each. The probate inventory of the stock of Charles Tias, of The sign of the Three Bibles on London Bridge, in 1664 included books and printed sheets to make approximately 90,000 chapbooks (including 400 reams of paper) and 37,500 ballad sheets. This was not regarded as an outstanding figure in the trade. The inventory of Josiah Blare, of The Sign of the Looking Glass on London Bridge, in 1707 listed 31,000 books, plus 257 reams of printed sheets. A conservative estimate of sales in Scotland alone in the second half of the 18th century was over 200,000 per year. 
Printers provided chapbooks on credit to chapmen, who sold them both from door to door and at markets and fairs, then paying for the stock they sold. This facilitated wide distribution and large sales with minimum outlay, and also provided the printers with feedback about what titles were most popular. Popular works were reprinted, pirated, edited, and produced in different editions. Publishers also issued catalogues, and chapbooks are found in the libraries of provincial yeomen and gentry. John Whiting, a Quaker yeoman imprisoned at Ilchester, Somerset, in the 1680s had books sent by carrier from London, and left for him at an inn. Samuel Pepys had a collection of ballads bound into volumes, under the following classifications, into which could fit the subject matter of most chapbooks: Stories in many chapbooks have much earlier origins. Bevis of Hampton was an Anglo-Norman romance of the 13th century, which probably drew on earlier themes. The structure of The Seven Sages of Rome was of Eastern origin, and was used by Geoffrey Chaucer. Many jests about ignorant and greedy clergy in chapbooks were taken from The Friar and the Boy printed about 1500 by Wynkyn de Worde, and The Sackfull of News (1557). Historical stories set in a mythical and fantastical past were popular, while many significant historical figures and events appear rarely or not at all: in the Pepys collection, Charles I, and Oliver Cromwell do not appear as historical figures, The Wars of the Roses and the English Civil War do not appear at all, Elizabeth I appears only once, and Henry VIII and Henry II appear in disguise, standing up for the right[clarification needed] with cobblers and millers and then inviting them to court and rewarding them. There was a pattern of high born heroes overcoming reduced circumstances by valour, such as Saint George, Guy of Warwick, Robin Hood, and heroes of low birth who achieve status through force of arms, such as Clim of Clough, and William of Cloudesley. Clergy often appear as figures of fun, and foolish countrymen were also popular (e.g., The Wise Men of Gotham). Other works were aimed at regional and rural audience (e.g., The Country Mouse and the Town Mouse). From 1597, works were published that were aimed at specific trades, such as cloth merchants, weavers and shoemakers. The latter were commonly literate.[clarification needed] Thomas Deloney, a weaver, wrote Thomas of Reading, about six clothiers from Reading, Gloucester, Worcester, Exeter, Salisbury and Southampton, traveling together and meeting at Basingstoke their fellows from Kendal, Manchester and Halifax. In his Jack of Newbury, set during Henry VIII's reign, an apprentice to a broadcloth weaver takes over his business and marries his widow on his death. On achieving success, he is liberal to the poor and refuses a knighthood for his substantial services to the king. Other examples from the Pepys collection include The Countryman's Counsellor, or Everyman his own Lawyer, and Sports and Pastimes, written for schoolboys, including magic tricks, like how to "fetch a shilling out of a handkerchief", write invisibly, make roses out of paper, snare wild duck, and make a maid-servant fart uncontrollably. The provinces and Scotland had their own local heroes. Robert Burns commented that one of the first two books he read in private was "the history of Sir William Wallace ... poured a Scottish prejudice in my veins which will boil along there till the flood-gates of life shut in eternal rest". 
Influence

Chapbooks had a wide and continuing influence. Eighty percent of English folk songs collected by early-20th-century collectors have been linked to printed broadsides, including over 90 that could only have derived from those printed before 1700. It has been suggested that the majority of surviving ballads can be traced to 1550–1600 by internal evidence. One of the most popular and influential chapbooks was Richard Johnson's Seven Champions of Christendom (1596), believed to be the source for the introduction of Saint George into English folk plays. Robert Greene's 1588 novel Dorastus and Fawnia, the basis of Shakespeare's The Winter's Tale, was still being published in cheap editions in the 1680s. Some stories were still being published in the 19th century (e.g., Jack of Newbury, Friar Bacon, Dr Faustus and The Seven Champions of Christendom).

Later production

Chapbook is also a term currently used to denote publications of up to about 40 pages, usually poetry bound with some form of saddle stitch, though many are perfect bound, folded, or wrapped. These publications range from low-cost productions to finely produced, hand-made editions that may sell to collectors for hundreds of dollars. More recently,[when?] the popularity of fiction and non-fiction chapbooks has also increased. In the UK they are more often referred to as pamphlets. The genre has been revitalized in the past 40 years by the widespread availability of first mimeograph technology, then low-cost copy centres and digital printing, and by the cultural revolutions spurred by both zines and poetry slams, the latter generating hundreds upon hundreds of self-published chapbooks that are used to fund tours. The Center for the Humanities at the City University of New York Graduate Center has held the NYC/CUNY Chapbook Festival, focused on "the chapbook as a work of art, and as a medium for alternative and emerging writers and publishers." For example, Lucia Berlin's story "Manual for Cleaning Women" was first published as a chapbook; Berlin later republished it as part of a collection that became a bestseller.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Google_DeepMind] | [TOKENS: 7442]
Google DeepMind

DeepMind Technologies Limited, trading as Google DeepMind or simply DeepMind, is a British-American artificial intelligence research laboratory which serves as a subsidiary of Alphabet Inc. Founded in the UK in 2010, it was acquired by Google in 2014 and merged with Google AI's Google Brain division to become Google DeepMind in April 2023. The company is headquartered in London, with research centres in the United States, Canada, France, Germany, and Switzerland. In 2014, DeepMind introduced neural Turing machines (neural networks that can access external memory like a conventional Turing machine). The company has created many neural network models trained with reinforcement learning to play video games and board games. It made headlines in 2016 after its AlphaGo program beat Lee Sedol, a Go world champion, in a five-game match, which was later featured in the documentary AlphaGo. A more general program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning. DeepMind has since trained models for game-playing (MuZero, AlphaStar), for geometry (AlphaGeometry), and for algorithm discovery (AlphaEvolve, AlphaDev, AlphaTensor). In 2020, DeepMind made significant advances in the problem of protein folding with AlphaFold, which achieved state-of-the-art records on benchmark tests for protein folding prediction. In July 2022, it was announced that over 200 million predicted protein structures, representing virtually all known proteins, would be released on the AlphaFold database. Google DeepMind has become responsible for the development of Gemini (Google's family of large language models) and other generative AI tools, such as the text-to-image model Imagen, the text-to-video model Veo, and the text-to-music model Lyria.

History

The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in November 2010. Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL). Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to the ones that are available today. Some of those games included Breakout, Pong, and Space Invaders. The AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time learning the game, the AI would eventually become an expert in it. The cognitive processes which the AI goes through are said to be very like those a human who had never seen the game would use to understand and attempt to master it. The goal of the founders is to create a general-purpose AI that can be useful and effective for almost anything. Major venture capital firms Horizons Ventures and Founders Fund invested in the company, as well as entrepreneurs Scott Banister, Peter Thiel, and Elon Musk. Jaan Tallinn was an early investor and an adviser to the company. On 26 January 2014, Google confirmed that it had agreed to acquire DeepMind Technologies, for a price reportedly ranging between $400 million and $650 million. The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013. The company was afterwards renamed Google DeepMind and kept that name for about two years.
In 2014, DeepMind received the "Company of the Year" award from Cambridge Computer Laboratory. In September 2015, DeepMind and the Royal Free NHS Trust signed their initial information sharing agreement to co-develop a clinical task management app, Streams. After Google's acquisition the company established an artificial intelligence ethics board. The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board. DeepMind has opened a new unit called DeepMind Ethics and Society and focused on the ethical and societal questions raised by artificial intelligence featuring prominent philosopher Nick Bostrom as advisor. In October 2017, DeepMind launched a new research team to investigate AI ethics. In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role. In March 2024, Microsoft appointed him as the EVP and CEO of its newly created consumer AI unit, Microsoft AI. In April 2023, DeepMind merged with Google AI's Google Brain division to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI in response to OpenAI's ChatGPT. This marked the end of a years-long struggle from DeepMind executives to secure greater autonomy from Google. Products and technologies As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science. DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019. Unlike earlier AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind's initial algorithms were intended to be general. They used reinforcement learning, an algorithm that learns from experience using only raw pixels as data input. Their initial approach used deep Q-learning with a convolutional neural network. They tested the system on video games, notably early arcade games, such as Space Invaders or Breakout. Without altering the code, the same AI was able to play certain games more efficiently than any human ever could. In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena. In 2013, DeepMind published research on an AI system that surpassed human abilities in games such as Pong, Breakout and Enduro, while surpassing state of the art performance on Seaquest, Beamrider, and Q*bert. This work reportedly led to the company's acquisition by Google. DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s. In 2020, DeepMind published Agent57, an AI Agent which surpasses human level performance on all 57 games of the Atari 2600 suite. In July 2022, DeepMind announced the development of DeepNash, a model-free multi-agent reinforcement learning system capable of playing the board game Stratego at the level of a human expert. In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero. This was the first time an artificial intelligence (AI) defeated a professional Go player. Previously, computers were only known to have played Go at "amateur" level. 
Go is considered much more difficult for computers to win at than other games such as chess, due to the much larger number of possibilities, making it prohibitively difficult for traditional AI methods such as brute-force search. In March 2016 it beat Lee Sedol, a 9-dan professional player, with a score of 4 to 1 in a five-game match. At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who had been the world's highest-ranked player for two years. In 2017, an improved version, AlphaGo Zero, defeated AlphaGo in a hundred out of a hundred games. Later that year, AlphaZero, a modified version of AlphaGo Zero, gained superhuman abilities at chess and shogi. In 2019, DeepMind released a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules.

AlphaGo technology was developed based on deep reinforcement learning, making it different from the AI technologies then on the market. The data fed into the AlphaGo algorithm consisted of various moves based on historical tournament data. The number of moves was increased gradually until over 30 million of them were processed. The aim was to have the system mimic the human player, as represented by the input data, and eventually become better. It played against itself and learned from the outcomes; thus, it learned to improve itself over time and increased its winning rate as a result. AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network was trained via supervised learning, and was subsequently refined by policy-gradient reinforcement learning. The value network learned to predict winners of games played by the policy network against itself. After training, these networks employed a lookahead Monte Carlo tree search, using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions. In contrast, AlphaGo Zero was trained without being fed data of human-played games. Instead it generated its own data, playing millions of games against itself. It used a single neural network, rather than separate policy and value networks. Its simplified tree search relied upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporated lookahead search inside the training loop. The AlphaGo Zero project involved around 15 people and millions in computing resources. Ultimately, it needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs), instead of AlphaGo's 48. It also required less training time, being able to beat its predecessor after just three days, compared with the months required for the original AlphaGo. Similarly, AlphaZero also learned via self-play.

Researchers applied MuZero to the real-world challenge of video compression within a set number of bits, relevant to Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal is to compress the video optimally, so that video quality is maintained while the amount of data is reduced. The final result using MuZero was a 6.28% average reduction in bitrate. In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information.
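The AlphaGo search procedure described above (a policy network proposing candidate moves, a value network scoring positions, and a Monte Carlo tree search tying them together) can be sketched in a few dozen lines. The sketch below is a heavily simplified, generic PUCT-style search, not DeepMind's implementation; `policy_value_fn`, `legal_moves`, and `play` are hypothetical stand-ins for a trained network and a game engine.

```python
# Heavily simplified sketch of policy/value-guided Monte Carlo tree search in the
# spirit of the AlphaGo/AlphaZero description above. NOT DeepMind's code:
# `policy_value_fn`, `legal_moves`, and `play` are hypothetical stand-ins.
import math
from typing import Callable, Dict, Tuple

class Node:
    def __init__(self, prior: float):
        self.prior = prior            # P(s, a) suggested by the policy network
        self.visit_count = 0          # N(s, a)
        self.value_sum = 0.0          # W(s, a)
        self.children: Dict[int, "Node"] = {}

    def q(self) -> float:             # mean value Q(s, a)
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5) -> Tuple[int, Node]:
    """PUCT rule: pick the child maximizing Q plus an exploration bonus U."""
    total_visits = sum(c.visit_count for c in node.children.values())
    def score(child: Node) -> float:
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        return child.q() + u
    return max(node.children.items(), key=lambda item: score(item[1]))

def expand(node: Node, state, policy_value_fn: Callable, legal_moves: Callable) -> float:
    """Ask the network for move priors and a position value; create child nodes."""
    priors, value = policy_value_fn(state)   # (dict move -> prior, value in [-1, 1])
    for move in legal_moves(state):
        node.children[move] = Node(prior=priors.get(move, 1e-3))
    return value

def run_mcts(root_state, policy_value_fn, legal_moves, play, num_simulations: int = 200) -> int:
    root = Node(prior=1.0)
    expand(root, root_state, policy_value_fn, legal_moves)
    for _ in range(num_simulations):
        node, state, path = root, root_state, []
        # Selection: descend while the node has already been expanded.
        while node.children:
            move, node = select_child(node)
            state = play(state, move)
            path.append(node)
        # Expansion + evaluation: the value network replaces full random rollouts.
        value = expand(node, state, policy_value_fn, legal_moves)
        # Backup: propagate the leaf value to the root, flipping sign per player
        # (sign handling is simplified relative to a production implementation).
        for visited in reversed(path):
            visited.visit_count += 1
            visited.value_sum += value
            value = -value
    # Play the most-visited move at the root.
    return max(root.children.items(), key=lambda item: item[1].visit_count)[0]
```

As the article notes, the original AlphaGo additionally mixed in fast rollout policies, while AlphaGo Zero and AlphaZero folded priors and values into a single network trained purely from self-play.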
In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match. In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed. By October 2019, AlphaStar had reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions. In 2014, a datacenter engineer at Google began using supervised machine learning to predict power usage effectiveness (PUE) of datacenters at Google. The system was deployed in production to allow operators to simulate control strategies and pick the one that saves the most energy. In 2016, inspired by AlphaGo, he contacted DeepMind to apply reinforcement learning (RL) to train a system that could also recommend actions. It was tested on a live datacenter. The system read from sensor readings and recommended actions to take, and human engineers would implement the actions. Though the human engineers found its recommendations unintuitive, they satisfied all safety constraints, and led to a 15% saving in PUE. The system was deployed more widely across Google, with datacenter controllers receiving email recommendations from the system every 15 minutes. Eventually a more mature and more autonomous system was deployed, where the AI's actions are checked against safety constraints and implemented autonomously if verified safe, and human operators would supervise the AI and may override. The system led to a 30% saving in PUE. The system produced cooling strategies that surprised long-time operators, such as exploiting winter conditions to produce colder than normal water. Google subsequently collaborated with Trane Technologies to deploy similar RL-based systems on HVAC of facilities outside of Google. In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. "This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem," Hassabis said to The Guardian. In 2020, in the 14th CASP, AlphaFold's predictions achieved an accuracy score regarded as comparable with lab techniques. Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as "truly remarkable", and said the problem of predicting how proteins fold had been "largely solved". In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. 
A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms. The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database. The most recent update, AlphaFold3, was released in May 2024, predicting the interactions of proteins with DNA, RNA, and various other molecules. In a particular benchmark test on the problem of DNA interactions, AlphaFold3 attained an accuracy of 65%, significantly improving on the previous state of the art of 28%. In October 2024, Hassabis and John Jumper received half of the 2024 Nobel Prize in Chemistry jointly for protein structure prediction, citing the AlphaFold2 achievement.

In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant. In 2018, Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet. In 2018, DeepMind introduced a more efficient model called WaveRNN, co-developed with Google AI. In 2020 WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was presented. In 2019, Google started to roll out WaveRNN with WaveNetEQ to Google Duo users.

Released in May 2022, Gato is a polyvalent multimodal model. It was trained on 604 tasks, such as image captioning, dialogue, or stacking blocks. On 450 of these tasks, Gato outperformed human experts at least half of the time, according to DeepMind. Unlike models like MuZero, Gato does not need to be retrained to switch from one task to the other. Sparrow is an artificial intelligence-powered chatbot developed by DeepMind to build safer machine learning systems by using a mix of human feedback and Google search suggestions. Chinchilla is a language model developed by DeepMind. DeepMind posted a blog post on 28 April 2022 on a single visual language model (VLM) named Flamingo that can accurately describe a picture of something with just a few training images. In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that creates computer programs at a rate comparable to that of an average programmer, with the company testing the system against coding challenges created by Codeforces and utilized in human competitive programming competitions. AlphaCode earned a rank within the top 54% of participants on Codeforces after being trained on GitHub data and Codeforces problems and solutions. The program was required to come up with a unique solution and was stopped from duplicating answers.

Gemini is a multimodal large language model which was released on 6 December 2023. It is the successor of Google's LaMDA and PaLM 2 language models and sought to challenge OpenAI's GPT-4. Gemini comes in three sizes: Nano, Pro, and Ultra. Gemini is also the name of the chatbot that integrates Gemini (and which was previously called Bard). On 12 December 2024, Google released Gemini 2.0 Flash, the first model in the Gemini 2.0 series. It notably features expanded multimodality, with the ability to also generate images and audio, and is part of Google's broader plans to integrate advanced AI into autonomous agents. On 25 March 2025, Google released Gemini 2.5, a reasoning model that stops to "think" before giving a response.
Google announced that all future models will also have reasoning ability. On 30 March 2025, Google released Gemini 2.5 to all free users. On 18 November 2025, Google released Gemini 3 Pro, a reasoning model which is fully multimodal. It was fully integrated with Google Search and AI Mode the same day. Gemma is a collection of open-weight large language models. The first ones were released on 21 February 2024 and are available in two distinct sizes: a 7 billion parameter model optimized for GPU and TPU usage, and a 2 billion parameter model designed for CPU and on-device applications. Gemma models were trained on up to 6 trillion tokens of text, employing similar architectures, datasets, and training methodologies as the Gemini model set. In June 2024, Google started releasing Gemma 2 models. In December 2024, Google introduced PaliGemma 2, an upgraded vision-language model. In February 2025, they launched PaliGemma 2 Mix, a version fine-tuned for multiple tasks. It is available in 3B, 10B, and 28B parameters with 224px and 448px resolutions. In March 2025, Google released Gemma 3, calling it the most capable model that can be run on a single GPU. It has four available sizes: 1B, 4B, 12B, and 27B. In March 2025, Google introduced TxGemma, an open-source model designed to improve the efficiency of therapeutics development. In April 2025, Google introduced DolphinGemma, a research artificial intelligence model designed to hopefully decode dolphin communication. They want to train a foundation model that can learn the structure of dolphin vocalizations and generate novel dolphin-like sound sequences. In March 2024, DeepMind introduced Scalable Instructable Multiword Agent, or SIMA, an AI agent capable of understanding and following natural language instructions to complete tasks across various 3D virtual environments. Trained on nine video games from eight studios and four research environments, SIMA demonstrated adaptability to new tasks and settings without requiring access to game source code or APIs. The agent comprises pre-trained computer vision and language models fine-tuned on gaming data, with language being crucial for understanding and completing given tasks as instructed. DeepMind's research aimed to develop more helpful AI agents by translating advanced AI capabilities into real-world actions through a language interface. In 2024, Google Deepmind published the results of an experiment where they trained two large language models to help identify and present areas of overlap among a few thousand group members they had recruited online using techniques like sortition to get a representative sample of participants. The project is named in honor of Jürgen Habermas. In one experiment, the participants rated the summaries by the AI higher than the human moderator 56% of the time. In May 2024, a multimodal video generation model called Veo was announced at Google I/O 2024. Google claimed that it could generate 1080p videos beyond a minute long. In December 2024, Google released Veo 2, available via VideoFX. It supports 4K resolution video generation, and has an improved understanding of physics. In April 2025, Google announced that Veo 2 became available for advanced users on Gemini App. In May 2025, Google released Veo 3, which not only generates videos but also creates synchronized audio — including dialogue, sound effects, and ambient noise — to match the visuals. Google also announced Flow, a video-creation tool powered by Veo and Imagen. 
Google DeepMind developed Lyria, a text-to-music model. As of August 2025, it is available on Vertex AI and the Gemini API. On February 18, 2026, DeepMind released Lyria 3. In March 2024, DeepMind introduced "Genie" (Generative Interactive Environments), an AI model that can generate game-like, action-controllable virtual worlds based on textual descriptions, images, or sketches. Built as an autoregressive latent diffusion model, Genie enables frame-by-frame interactivity without requiring labeled action data for training. Its successor, Genie 2, released in December 2024, expanded these capabilities to generate diverse and interactive 3D environments. Genie 3 was released in August 2025, with higher-resolution world generations and multiple minutes of visual consistency. On January 29, 2026, DeepMind released Project Genie to AI Ultra subscribers. Released in June 2023, RoboCat is an AI model that can control robotic arms. The model can adapt to new models of robotic arms, and to new types of tasks. In March 2025, DeepMind launched two AI models, Gemini Robotics and Gemini Robotics-ER, aimed at improving how robots interact with the physical world and released Gemini Robotics 1.5 in September 2025. DeepMind researchers have applied machine learning models to the sport of football, often referred to as soccer in North America, modelling the behaviour of football players, including the goalkeeper, defenders, and strikers during different scenarios such as penalty kicks. The researchers used heat maps and cluster analysis to organize players based on their tendency to behave a certain way during the game when confronted with a decision on how to score or prevent the other team from scoring. The researchers mention that machine learning models could be used to democratize the football industry by automatically selecting interesting video clips of the game that serve as highlights. This can be done by searching videos for certain events, which is possible because video analysis is an established field of machine learning. This is also possible because of extensive sports analytics based on data including annotated passes or shots, sensors that capture data about the players movements many times over the course of a game, and game theory models. Google has unveiled a new archaeology document program, named Ithaca after the Greek island in Homer's Odyssey. This deep neural network helps researchers restore the empty text of damaged Greek documents, and to identify their date and geographical origin. The work builds on another text analysis network that DeepMind released in 2019, named Pythia. Ithaca achieves 62% accuracy in restoring damaged texts and 71% location accuracy, and has a dating precision of 30 years. The authors claimed that the use of Ithaca by "expert historians" raised the accuracy of their work from 25 to 72 percent. However, Eleanor Dickey noted that this test was actually only made of students, saying that it wasn't clear how helpful Ithaca would be to "genuinely qualified editors". The team is working on extending the model to other ancient languages, including Demotic, Akkadian, Hebrew, and Mayan. In November 2023, Google DeepMind announced an Open Source Graph Network for Materials Exploration (GNoME). The tool proposes millions of materials previously unknown to chemistry, including several hundred thousand stable crystalline structures, of which 736 had been experimentally produced by the Massachusetts Institute of Technology, at the time of the release. 
However, according to Anthony Cheetham, GNoME did not make "a useful, practical contribution to the experimental materials scientists." A review article by Cheetham and Ram Seshadri was unable to identify any "strikingly novel" materials found by GNoME, with most being minor variants of already-known materials.

In October 2022, DeepMind released AlphaTensor, which used reinforcement learning techniques similar to those in AlphaGo to find novel algorithms for matrix multiplication. In the special case of multiplying two 4×4 matrices with integer entries, where only the evenness or oddness of the entries is recorded, AlphaTensor found an algorithm requiring only 47 distinct multiplications; the previous optimum, known since 1969, was the more general Strassen algorithm, using 49 multiplications. Computer scientist Josh Alman described AlphaTensor as "a proof of concept for something that could become a breakthrough", while Vassilevska Williams called it "a little overhyped" despite also acknowledging its basis in reinforcement learning as "something completely different" from previous approaches.

AlphaGeometry is a neuro-symbolic AI that was able to solve 25 out of 30 geometry problems of the International Mathematical Olympiad, a performance comparable to that of a gold medalist. Traditional geometry programs are symbolic engines that rely exclusively on human-coded rules to generate rigorous proofs, which makes them lack flexibility in unusual situations. AlphaGeometry combines such a symbolic engine with a specialized large language model trained on synthetic data of geometrical proofs. When the symbolic engine doesn't manage to find a formal and rigorous proof on its own, it solicits the large language model, which suggests a geometrical construct to move forward. However, it is unclear how applicable this method is to other domains of mathematics or reasoning, because symbolic engines rely on domain-specific rules and because of the need for synthetic data.

AlphaProof is an AI model which couples a pre-trained language model with the AlphaZero reinforcement learning algorithm. AlphaZero has previously taught itself how to master games. The pre-trained language model used in this combination is a fine-tuned Gemini model that automatically translates natural language problem statements into formal statements, creating a large library of formal problems of varying difficulty. For this purpose, mathematical statements are defined in the formal language Lean. At the 2024 International Mathematical Olympiad, AlphaProof together with an adapted version of AlphaGeometry reached, for the first time, the same level of solving problems in the combined categories as a silver medalist in that competition.

In June 2023, DeepMind announced that AlphaDev, which searches for improved computer science algorithms using reinforcement learning, had discovered a more efficient way of coding a sorting algorithm and a hashing algorithm. The new sorting algorithm was 70% faster for shorter sequences and 1.7% faster for sequences exceeding 250,000 elements, and the new hashing algorithm was 30% faster in some cases. The sorting algorithm was accepted into the C++ Standard Library sorting algorithms, and was the first change to those algorithms in more than a decade and the first update to involve an algorithm discovered using AI. The hashing algorithm was released to an open-source library. Google estimates that these two algorithms are used trillions of times every day.
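For context on the AlphaTensor comparison above: the 49-multiplication baseline comes from applying Strassen's 1969 scheme, which multiplies two 2×2 matrices with 7 multiplications instead of the naive 8, recursively to 4×4 block matrices (7 × 7 = 49). The sketch below shows the classic Strassen step; it is the published 1969 algorithm, not the algorithm AlphaTensor discovered.

```python
# The classic Strassen step: multiply two 2x2 matrices with 7 multiplications
# instead of the naive 8. Applied recursively to 4x4 block matrices it gives
# 7 * 7 = 49 multiplications, the baseline that AlphaTensor's 47-multiplication
# algorithm (for arithmetic modulo 2) improves on. This is Strassen's published
# scheme, not the AlphaTensor-discovered one.

def strassen_2x2(A, B):
    """A and B are 2x2 matrices given as [[a11, a12], [a21, a22]]."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4,           m1 - m2 + m3 + m6],
    ]

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    assert strassen_2x2(A, B) == naive  # [[19, 22], [43, 50]]
    print(strassen_2x2(A, B))
```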
In May 2025, Google DeepMind unveiled AlphaEvolve, an evolutionary coding agent using LLMs like Gemini to design optimized algorithms. AlphaEvolve begins each optimization process with an initial algorithm and metrics to evaluate the quality of a solution. At each step, it uses the LLM to generate variations of the algorithms or combine them, and selects the best candidates for further iterations. AlphaEvolve has made several algorithmic discoveries, including in matrix multiplication. According to Google, when tested on 50 open mathematical problems, AlphaEvolve was able to match the efficiency of state-of-the-art algorithms in 75% of cases, and discovered improved solutions 20% of the time, such as with the kissing number problem in 11 dimensions. It also developed a new heuristic for data centre scheduling, recovering on average 0.7% of Google's worldwide compute resources. AlphaChip is a reinforcement learning-based neural architecture that guides the task of chip placement. DeepMind claimed that the technique reduced the time needed to create chip layouts from weeks to hours. According to the company, its chip designs were used in every Tensor Processing Unit (TPU) iteration since 2020. Multiple independent researchers remained unconvinced, citing a lack of direct public benchmarks and independent proof of its claimed superiority over existing commercial chip design tools. The TPU chips were co-designed with Broadcom. Communications of the ACM noted that despite substantial publicity, DeepMind had not provided the comparative benchmarks long requested by experts, leaving some skepticism in the field. Similarly, New Scientist reported that while Google claims AlphaChip has produced “superhuman” chip layouts now used in production, external specialists called for transparent performance data to substantiate these assertions and enable fair comparisons with current state-of-the-art methods. Google Research released a paper in 2016 regarding AI safety and avoiding undesirable behaviour during the AI learning process. In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours. Google DeepMind developed an AI-based weather prediction system called Weather Lab, which significantly improved tropical cyclone forecasting. Launched in mid-2025, this model utilized stochastic neural networks trained on 45 years of global weather and cyclone data, enabling it to predict cyclone formation, track, intensity, and structure with multiple probabilistic forecasts up to 15 days in advance. During the 2025 Atlantic hurricane season, DeepMind's Weather Lab outperformed traditional physics-based models, including the US National Weather Service's Global Forecast System, in both track and intensity predictions, earning notable recognition from meteorologists and aiding hurricane forecasting efforts by the US National Hurricane Center. This marked a substantial advancement in weather modeling, demonstrating the potential for AI to enhance the speed and accuracy of severe weather forecasts. DeepMind (alongside other Alphabet AI researchers) assists Google Play's personalized app recommendations. DeepMind has also collaborated with the Android team at Google for the creation of two new features which were made available to people with devices running Android Pie, the ninth installment of Google's mobile operating system. 
These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more computing power. DeepMind Health In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was announced to develop AI applications for healthcare. DeepMind would be applied to the analysis of anonymised eye scans, searching for early signs of diseases leading to blindness. In August 2016, a research programme with University College London Hospital was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas. There are also projects with the Royal Free London NHS Foundation Trust and Imperial College Healthcare NHS Trust to develop new clinical mobile apps linked to electronic patient records. Staff at the Royal Free Hospital were reported as saying in December 2017 that access to patient data through the app had saved a 'huge amount of time' and made a 'phenomenal' difference to the management of patients with acute kidney injury. Test result data is sent to staff's mobile phones and alerts them to changes in the patient's condition. It also enables staff to see if someone else has responded, and to show patients their results in visual form. In November 2017, DeepMind announced a research partnership with the Cancer Research UK Centre at Imperial College London with the goal of improving breast cancer detection by applying machine learning to mammography. Additionally, in February 2018, DeepMind announced it was working with the U.S. Department of Veterans Affairs in an attempt to use machine learning to predict the onset of acute kidney injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need. DeepMind developed an app called Streams, which sends alerts to doctors about patients at risk of acute kidney injury. On 13 November 2018, DeepMind announced that its health division and the Streams app would be absorbed into Google Health. Privacy advocates said the announcement betrayed patient trust and appeared to contradict previous statements by DeepMind that patient data would not be connected to Google accounts or services. A spokesman for DeepMind said that patient data would still be kept separate from Google services or projects. In April 2016, New Scientist obtained a copy of a data sharing agreement between DeepMind and the Royal Free London NHS Foundation Trust. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals. This included personal details such as whether patients had been diagnosed with HIV, suffered from depression or had ever undergone an abortion in order to conduct research to seek better outcomes in various health conditions. A complaint was filed to the Information Commissioner's Office (ICO), arguing that the data should be pseudonymised and encrypted. 
In May 2016, New Scientist published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare products Regulatory Agency. In 2017, the ICO concluded a year-long investigation that focused on how the Royal Free NHS Foundation Trust tested the app, Streams, in late 2015 and 2016. The ICO found that the Royal Free failed to comply with the Data Protection Act when it provided patient details to DeepMind, and found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test. DeepMind published its thoughts on the investigation in July 2017, saying "we need to do better" and highlighting several activities and initiatives they had initiated for transparency, oversight and engagement. This included developing a patient and public involvement strategy and being transparent in its partnerships. In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her "considered opinion" the data-sharing agreement between DeepMind and the Royal Free took place on an "inappropriate legal basis". The Information Commissioner's Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind.

DeepMind Ethics and Society

In October 2017, DeepMind announced a new research unit, DeepMind Ethics & Society. Its goal is to fund external research on the following themes: privacy, transparency, and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world's challenges. Through this work, the team hopes to further understand the ethical implications of AI and help society see how AI can be beneficial. This new subdivision of DeepMind is a completely separate unit from the Partnership on Artificial Intelligence to Benefit People and Society, a partnership of leading companies using AI, academia, civil society organizations and nonprofits, of which DeepMind is also a part. The DeepMind Ethics and Society board is also distinct from the mooted AI Ethics Board that Google originally agreed to form when acquiring DeepMind.

DeepMind Professors of machine learning

DeepMind sponsors three chairs of machine learning.
========================================
[SOURCE: https://he.wikipedia.org/wiki/מים] | [TOKENS: 22701]
Water

Water is a chemical compound which, in its liquid form, is the basis of all known forms of life. A water molecule consists of two hydrogen atoms and one oxygen atom; the chemical formula of water is H2O (the notation (aq) marks a species dissolved in water). The chemical properties of water allow it to act as a solvent for many compounds in nature, and accordingly water in nature is a solution: water in which various minerals are dissolved. Distilled water is pure water containing no solute; it is obtained by condensing water vapour in a closed system. A solution is a state in which atoms and molecules associate with the water according to their polarity but retain their own properties; the solute can be separated out again by evaporation. A suspension is a state in which particles of a substance (colloids) float in the water but do not form a solution. A compound, by contrast, results from a chemical reaction between the water and the solute, producing a new substance whose properties differ from those of both the water and the solute; forming a compound always involves a transfer of energy.

Physical properties

Water has three states of matter: solid ("ice"); liquid, which is its state at room temperature; and gas, in which case it is called water vapour. Fog or a cloud is a suspension of water droplets carried in the air. Under ordinary conditions the freezing point of water is zero degrees Celsius and its boiling point is one hundred degrees Celsius, at a pressure of one atmosphere, defined as the air pressure at sea level on Earth. As with any liquid, the transition from one state of matter to another depends on three parameters: pressure, temperature and volume. When the volume is fixed, the state of matter changes according to pressure and temperature (hence in a pressure cooker, where the volume is fixed and the pressure is 1.2 atm,[clarification needed] boiling occurs at 120 degrees). Every substance has a critical temperature for the transition from gas to liquid and from liquid to solid. The critical temperature of water is 374 degrees; above this temperature water vapour cannot be condensed into a liquid. The critical pressure of water is 218 atmospheres; above this pressure, liquid and gas can no longer be distinguished at any temperature. Thus, at high altitude (for example on the peaks of the Himalayas) the boiling temperature can drop to about 70 degrees Celsius, and on the surface of Mars, where the atmospheric pressure is only about one percent of Earth's, the boiling temperature is about 4 degrees. A nucleation point is the site at which water begins to crystallize into ice (a crystallization point) or at which it begins to boil (a boiling point). Distilled water under negative pressure (vacuum) will boil at a temperature of only 7 degrees Celsius. In the absence of nucleation points, water may remain liquid down to much lower temperatures: under strong negative pressure it can be cooled to 38 degrees Celsius below zero before it solidifies, and as tiny droplets it can remain liquid to an even lower temperature (about −41 °C).

The anomaly of water

Water has a number of chemical and physical properties that stem mainly from the polarity of the water molecule.

The molecular structure of water

The water molecule is built from two hydrogen atoms and one oxygen atom. The distance between the hydrogen atoms and the oxygen atom is 0.957854 ångströms, and the two O–H bonds form an angle of 104.45 degrees. Water is a polar inorganic compound. Most of the negative electric field is produced by the oxygen atom: the electrons are attracted more strongly to the oxygen atom than to the hydrogen atoms, so most of the time (probabilistically) the electrons are found close to the oxygen atom and only relatively rarely around the hydrogen atoms. This creates a negative electric field around the oxygen atom and a positive electric field around the hydrogen atoms. The field that the electrons create around the H atoms is on average only about 3.3 parts in ten billion of the field of a normal electron, so the positive (+) field around the H atoms is very large, on average 99.999999967% of the field of an ordinary proton. One can therefore say that the O atom behaves (almost) as a −2 ion and the H atoms behave (almost) as +1 ions. Accordingly, when the O corner of one water molecule (which carries negative charge) approaches the H corner of another molecule (which carries positive charge), the two molecules attract each other. In ice crystals, each water molecule attracts four neighbouring molecules: its two H atoms are attracted to the O atoms of two neighbouring molecules, and its O atom is attracted to H atoms of two other neighbouring molecules.
The distances between the H atoms and the O atom in liquid and solid water differ from the distances in the gas. The reason is that in the solid and the liquid (unlike the gas) the molecules attract one another and are therefore slightly "stretched", whereas in the gas there is no such attraction and the distances are slightly shorter (the H–O distance is 0.95718 angstroms in the gas and 0.95784 angstroms in the solid and the liquid).

Solubility is the property of a substance dispersing within a liquid to produce a homogeneous solution. Water is a solvent for substances whose molecules disperse in the water without settling out. The dissolved substance may dissociate into ions (for example table salt, NaCl) or remain as polar molecules that are not ions (such as sugar). When the amount of solute exceeds its solubility limit, the excess crystallizes. As the temperature rises, solubility increases. Solubility is considered a physical property because the components retain their own properties; compounds, by contrast, are formed in a chemical process.

Intermolecular bonds The chemical composition of water (H2O) produces a molecule with a positive pole and a negative pole, arising from the distribution of charge between the hydrogen and oxygen atoms. This gives rise to a structure based on hydrogen bonds between its molecules. Another bond between molecules carrying polar charges is the van der Waals bond, in which two molecules are attracted to each other electrostatically. This bond is weaker than a hydrogen bond, but given that many such bonds form between molecules (as in DNA and colloids), the overall cohesion is strong. Substances that can form hydrogen or van der Waals bonds with water are hydrophilic (from the Greek for "water-loving"), while substances that repel such bonds are generally hydrophobic.

The surface tension of a substance arises from the intermolecular force within the substance itself, called cohesion. On contact with the surface of another substance (usually a solid), an attractive force arises between the two surfaces, called adhesion. When the adhesion is stronger than the cohesion, the liquid spreads into a film over the solid, a situation described as hydrophilic; when the cohesive forces are stronger than the adhesion, a spherical droplet forms on the surface and the surface is described as hydrophobic. A substance can be hydrophilic and yet dissolve only with difficulty, like starch, so solubility should be treated as a property separate from hydrophilicity. The word "hydrophobic" comes from Greek: ὕδωρ (hydor), water, and φόβος (phobos), fear. Hydrophobic substances are substances that do not dissolve in water, or that form a sharp boundary when placed in water (for example, oil); water tends to gather into a spherical shape in order to minimize its area of contact with a hydrophobic solid. In liquids whose cohesion is high relative to the adjoining gas (such as air), a surface layer forms that acts like a membrane clinging to the liquid; when the liquid touches a solid, this surface tension turns into an attractive force between the solid and the liquid, called adhesion. A surfactant is a surface-active agent that lowers the surface tension between a solid and a liquid or gas and therefore increases adhesion and hydrophilicity; it is used mainly in spraying, to obtain uniform contact between a leaf and the sprayed material. An emulsifier causes a hydrophobic substance to disperse in water while forming tiny droplets called micelles.

Liquids are distinguished from one another by Newton's law of viscous flow, $\tau = \mu \frac{dv}{dy}$, in which the shear stress τ equals the viscosity coefficient μ multiplied by the velocity gradient dv/dy. Cohesion is the force binding the molecules of the liquid to one another. In hydrophobic liquids (such as oil) the intermolecular forces are mainly van der Waals forces, based on the mutual attraction between positive and negative electric charges, together with aggregation into micelles. Surface tension is defined by $\gamma = \frac{W}{\Delta A}$: the surface tension γ equals the work W divided by the change in surface area ΔA. Adhesion is the attractive force acting between unlike molecules, usually between a liquid and a solid surface (such as the wall of a tube, glass, or cloth). A substance that increases the adhesion between water and oil is called an emulsifier; it creates tiny droplets of the oil within the water (micelles). When the liquid is water, substances for which the adhesion is stronger than the water's cohesion are called hydrophilic, and substances for which, on contact with water, the cohesion is stronger than the adhesion are hydrophobic. When a thin glass tube is dipped into water, the water rises in the tube as a result of the adhesive attraction between the water and the glass.
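To make the surface-tension and capillarity discussion above concrete, here is a small Python sketch. It uses the relation γ = W/ΔA quoted in the text and, for the capillary-rise case, Jurin's law, which the article does not name; the numerical values (the surface tension and density of water, the tube radius, and a contact angle of zero for clean glass) are assumptions chosen only for illustration.

```python
# A minimal sketch (values assumed, not from the article).
# 1) Work needed to enlarge a water surface: gamma = W / dA  =>  W = gamma * dA
# 2) Capillary rise of water in a thin glass tube (Jurin's law):
#    h = 2 * gamma * cos(theta) / (rho * g * r)
import math

GAMMA = 0.072  # surface tension of water near room temperature, N/m (assumed)
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def surface_work(delta_area_m2: float) -> float:
    """Work (J) required to increase the free water surface by delta_area_m2."""
    return GAMMA * delta_area_m2

def capillary_rise(radius_m: float, contact_angle_deg: float = 0.0) -> float:
    """Height (m) to which water climbs in a tube of the given radius."""
    return 2 * GAMMA * math.cos(math.radians(contact_angle_deg)) / (RHO * G * radius_m)

# Enlarging the free surface by 1 cm^2 costs only a few microjoules of work:
print(f"work to add 1 cm^2 of surface: {surface_work(1e-4):.1e} J")
# In a glass tube of 0.5 mm radius (contact angle ~0), water rises a few centimetres:
print(f"capillary rise in a 0.5 mm tube: {capillary_rise(0.0005) * 100:.1f} cm")
```

With these assumed values the rise in a 0.5 mm tube comes out to roughly 3 cm, which is the effect described at the end of the passage above; in a hydrophobic tube the contact angle exceeds 90 degrees, the cosine becomes negative, and the same formula gives a depressed, convex liquid surface.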
In a hydrophobic material the cohesive force is stronger than the adhesive force: the liquid is repelled by the glass, its surface becomes convex, and it sinks lower in the tube.

Chemical bonds The polar structure of the water molecule tends to form ionic bonds with elements and compounds through dissociation into a proton (H+) and hydroxide (OH−). This spontaneous dissociation also determines the pH of water: H2O ⇌ H+ + OH−. In this state, in the presence of a salt or a solute whose atoms are held together by shared electrons (covalent bonds), the solute dissociates into ions, which always react with counter-ions: NaCl ⇌ Na+ + Cl− (electrolytes). Oxidation is a process in which an atom gives up an electron to another atom (or molecule); reduction is the acceptance of an electron (a proton is positive, an electron negative). The tendency of elements to give up or accept electrons is expressed in the electromotive series, and distinguishes metals, which are oxidized, from non-metals, which act as oxidizers. In oxidation–reduction reactions, such as the reaction of an active metal (sodium) with water, the sodium is oxidized and the water dissociates while releasing free hydrogen: 2Na + 2H2O → 2Na+ + 2OH− + H2 + energy, an exothermic process (one that releases energy). Water thus serves as a mediating agent between metals and non-metals. Distilled water does not conduct electric current, but adding a solute that dissociates into ions allows good electrical conductivity; the conduction arises from the dissociation of the water into positive and negative ions in the presence of the ions of a solute (for example a salt, an acid or a base). Passing direct current through water that contains ions (electrolytes) drives a chemical process called electrolysis, in which the water is decomposed into oxygen and hydrogen by passing current between a negative electrode (the cathode) and a positive electrode (the anode). The positive hydrogen ions accept electrons at the negative cathode (reduction), while the hydroxide gives up electrons at the positive anode (oxidation); as a result, hydrogen is released at the cathode and oxygen at the anode. In this way hydrogen and oxygen can be produced from water, at the cost of invested energy. These three properties are the basis for biochemical processes, for the formation of complex compounds, and for the activity of the enzymes that direct processes and make possible the construction of organelles and cells. The formation of membranes by combining hydrophilic and hydrophobic substances (water and oil) makes possible the creation of small bodies (organelles) that form the components of the cell.

Biological membranes Biological membranes make up all living cells, from bacteria and viruses to animals and plants. They form the intracellular skeleton and the envelope of the organelles that compose the cell, and they enable all the processes necessary for life (for example liposomes, ribosomes, lysosomes, mitochondria and more). They are built from molecules that are half hydrophilic and half hydrophobic; the intracellular membranes are based on a double layer in which the hydrophobic parts face each other on the inside and the hydrophilic parts face outward, creating insulation between the contents of the vesicle and its surroundings. Embedded within the membrane are substances that allow selective permeability to small molecules such as sodium and potassium ions, so that osmotic equilibrium is created between the cell and its surroundings; in addition there are systems of active transport (transport requiring active work and the investment of energy against the concentration gradient) that are specific to molecules such as sugars, proteins and fats, as well as receptors that allow recognition between cells and responses unique to the cell.

Language and etymology In Greek, the prefix hydro- (ύδρο) denotes water, liquids or hydrogen, and it appears in many words and terms. The prefix aqua- (from the Latin aqua) likewise denotes water, though it is less common and is used mainly in contexts outside chemistry, such as aquarium or aqueduct.

The importance of water for life Water is a substance essential to the existence of all known forms of life. Animals and plants need water to complete their life cycles. This fact is directly connected to the abundance of water on Earth: Earth's being the source of all known life is consistent with water being a necessary condition for that life. Water makes up most of the mass of most living organisms and fills many roles in their bodies.
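The autoionization equilibrium H2O ⇌ H+ + OH− mentioned in the chemical-bonds passage above is what fixes the pH scale. The short Python sketch below is an illustration only: it assumes the standard ion product of water at 25 °C, Kw ≈ 1.0 × 10^-14 (a value the article does not give), and shows how the pH of pure water and of a dilute strong acid follows from the H+ concentration.

```python
# A minimal sketch (Kw value assumed, not from the article):
# pH = -log10[H+], and in water [H+] * [OH-] = Kw (about 1e-14 at 25 deg C).
import math

KW = 1.0e-14  # ion product of water at 25 deg C (assumed standard value)

def ph_from_h(h_conc: float) -> float:
    """pH corresponding to a given H+ concentration (mol/L)."""
    return -math.log10(h_conc)

def oh_from_h(h_conc: float) -> float:
    """OH- concentration (mol/L) in equilibrium with the given H+ concentration."""
    return KW / h_conc

# Pure water: [H+] = [OH-] = sqrt(Kw) = 1e-7 mol/L, so pH = 7 (neutral).
h_pure = math.sqrt(KW)
print(f"pure water: [H+] = {h_pure:.1e} mol/L, pH = {ph_from_h(h_pure):.1f}")

# A fully dissociated 0.001 mol/L strong acid: pH = 3, and [OH-] falls to 1e-11 mol/L.
h_acid = 1.0e-3
print(f"0.001 M strong acid: pH = {ph_from_h(h_acid):.1f}, [OH-] = {oh_from_h(h_acid):.1e} mol/L")
```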
The main roles are its use as the intracellular fluid (cytoplasm) and as the main component of blood, but water is also needed for many other secondary roles, such as cleansing the body, sweating and lubrication. Every living cell needs a certain amount of water in order to exist. Some organisms extract the gases needed for their respiration (oxygen or carbon dioxide) from the water; fish gills are the best-known organ for extracting dissolved respiratory gases from water. Besides these internal uses, various organisms use water for external purposes. The chief example is water as a living environment for aquatic life: fish, aquatic plants and so on. Other examples are the use of water as an environment for laying eggs, as a source of food, and as a buffer against predators. Living creatures obtain water in different ways: animals take in water mainly by drinking, while plants draw most of their water through their roots. Living organisms need water in certain quantities and of a certain quality. That quality is determined by the substances dissolved in the water: although life requires varying amounts of dissolved substances (such as minerals), most substances dissolved in water, or desirable substances in undesirable amounts, render the water unusable or even toxic. A change in water quality that harms one particular use (for example, drinking) does not necessarily harm other possible uses (for example, egg laying). The most prominent example of water that is unusable without special treatment is the water of the oceans and the salt seas, which only certain aquatic species can drink. Other kinds of unusable water are waters polluted by industry. Water vapor is a greenhouse gas.

Water for humans Water serves humans for a wide range of activities, among them drinking, cleaning, cooking, irrigation, cooling and heating, dissolving, sailing, hydroelectric power generation, swimming, bathing, diving, fishing and firefighting. These activities are divided into domestic consumption (mostly drinking, hygiene and cooking), agricultural use (mostly irrigation and drinking water for domesticated animals), industrial use (drainage, cooling, dissolving and more), urban use (mostly drainage), transportation (shipping), research, and leisure (boating, swimming and more). Humans take in water through drinking and food and excrete it through sweat, excretion and exhalation. The most important use of water, though not the largest in quantity, is drinking. Humans need fresh water (that is, water without the high salt content of the oceans), and so have depended on water originating in precipitation, and more recently also on desalination. Water consumption per capita is not the same for all human beings; it depends on many human characteristics, on production methods, on standard of living and so forth. Because of water's many uses, there is a close mutual influence between water and the fields of demography and economics. The availability of fresh water for drinking and irrigation, like proximity to fishing areas and access to maritime transport routes, can influence patterns of human settlement. People generally prefer to settle near a source of fresh water and with access to bodies of salt water, although various technological developments (for example agricultural improvements, the development of transport technologies and the like) change these tendencies. Patterns of settlement on the one hand and the exploitation of water on the other change the state of the water. Besides the self-evident changes (for example, drinking transfers water into the human body), there are many complex changes in the availability of water; for example, building a dam across a river may divert its course. These mutual influences of humans and water make water a significant component of human politics, even if not always openly. Issues such as water pollution and control of water sources are decisive in various political power struggles. While in the Western world the supply of water is usually taken for granted, this is not the case in third-world countries. Billions of people, most of the world's population, have no access to clean water. Severe water shortage prevents the inhabitants of these regions from drinking enough and maintaining hygiene, which increases disease; others use polluted water sources, which likewise causes disease. The overwhelming majority of cases of several diseases common in the third world, such as malaria and dysentery, are caused by polluted water.
In discussions of the situation of third-world countries the subject of water is often pushed to the margins, yet many of the problems afflicting poor countries can be solved only after the water crisis is addressed seriously. A report by the international conservation organization WWF warns of a future shortage of drinking water even in Western countries, owing to the melting of glaciers in northern Europe and various wasteful practices such as the cheap production of clothing, fruit, vegetables and jewelry. In London alone, leaks from old water infrastructure waste a quantity equal to about 300 Olympic swimming pools every day. World Water Day falls on 22 March. In the Land of Israel, which has almost no rivers and much of which lies on the edge of the desert, supplying water has always been a challenge; today, thanks to the construction of desalination plants, the water problem has largely been solved, and Israel even supplies water to its neighbor Jordan as part of the peace agreements between the two countries. The Governmental Authority for Water and Sewage is responsible for managing Israel's water economy and oversees all the bodies that produce water in the country. "Mekorot", Israel's national water company, founded back in the Mandate period by Levi Eshkol, is the largest water production and distribution authority in Israel; it produces about 70% of Israel's water and is also responsible for conveying it at the national level, mainly through the National Water Carrier. Mekorot's duties under the law are: "to establish the national project, to manage it, to supply water from it and to maintain it in good order, to improve it, to expand it and to perform any other action required to supply water from it." As in many fields, the government has adopted a policy of privatization in the water economy. The memorandum of association of a company limited by shares that announced the founding of Mekorot, published on 31 January 1937, defined the task of the water company that was to be established: "to carry out and do all things necessary, or suitable, for obtaining water, storing it, selling it, delivering it, distributing it or supplying it." The memorandum was signed by the company's first nine directors, from the Jewish Agency, the Jewish National Fund, the Histadrut-owned company "Nir", and the Agricultural Center. Following the distress of Israel's water economy, a levy on water consumption, the drought levy, was imposed in 2009. As part of human influence on water, human society has needed to move water in order to serve its purposes; besides this, there is sometimes a need to change the state of the water in various ways. There may also be uses of water that do not involve changing it directly; for example, flotation benefits people in many fields, above all transportation (in floating vehicles called watercraft, and even directly, without a vessel, as in floating logs down rivers). Likewise, diving and the submersion of objects in water (not necessarily of people) are essential for purposes such as measuring volume, fishing and the like.

Human exploitation of water The possibilities of obtaining water, storing it and conveying it from place to place determined the location of settlements, their character, the way of life in them and their economic and cultural development. Until the Roman period, about two thousand years ago, most settlements were close to water sources such as springs and streams; the water supply of other settlements depended on cisterns and storage pools of rainwater. The need for water led to the development of various installations for obtaining it: installations for conveying water, storage installations, and installations for exploiting groundwater. One of the earliest installations for conveying water from place to place is the channel. The channel was dug into the ground and its sides were reinforced with stones. Large channels reached lengths of several kilometers, widths of about two meters and depths of close to three meters. The water collected into the channels was floodwater or water from permanent sources such as springs and streams, and it flowed through the channels by gravity. The water sources of fortified towns often lay outside the walls; to ensure the town's water supply even in time of war and siege, underground passages were cut to reach the water sources. One of the most famous of these in Israel is the Siloam tunnel, which supplied water to the inhabitants of ancient Jerusalem. As the human population grew and settlement expanded, the need arose to convey water to settlements and to farmland far from the water sources. For this purpose aqueducts were built. The Hebrew name, "amat mayim", dates from the period of the Mishnah, when it was customary to dig water channels one "amah" wide and deep (an ancient unit of measure of 48 cm, or 60 cm according to one opinion), hence their name. The aqueducts consisted of a series of built channels, open or closed, laid on the ground or carried on bridges.
The water source was higher than its destination, so that the water would flow by gravity; at times the aqueduct crossed mountain ridges through hewn tunnels. Some storage installations in ancient times were natural, such as rock pools (pools formed in depressions in the ground), lakes and seasonal flood plains. Other storage installations were built, such as dams, pools and cisterns. A dam is a wall built across a streambed that stops the flow of the water; stopping the flow creates a reservoir upstream. The stored water was usually led in channels to different places as needed. Dams served both for storing drinking water and for storing irrigation water. The common irrigation method in ancient times was flooding: from the reservoir the water was led to a field divided into plots; after one plot was flooded, the surplus water passed to another, lower plot, and so on. Pools were usually built near a spring or along the course of floodwater flow. The main drawback of storage behind dams or in open pools was the evaporation of a considerable part of the water in the reservoir. There were also underground, tunnel-like pools in which the water kept for longer periods; such pools can be found along the slopes of wadis in the Negev. Cisterns were the most common underground reservoirs in the country, and some of them are used for storing water to this day. The cisterns were dug in the ground or hewn in rock at various depths. Their shape was that of a bell, wide at the bottom and narrowing upward toward the opening; this structure reduced evaporation and made it easier to cover the cistern, a cover being needed to prevent contamination of the water. Rainwater was collected into the cistern through open channels from the streets and also from the roofs of houses. Storing water in cisterns made it possible to establish permanent settlements even in areas with no springs or other water sources available all year round. The search for ways to exploit groundwater arose because in some areas there were no surface water sources (springs, lakes, streams) available in all seasons. Groundwater exists even in desert regions, and the various installations were designed to bring it to the surface. Artificial rock pools, called in Arabic "thamail", were dug to a shallow depth in streambeds or near the sea, in places where the groundwater is close to the surface. The groundwater seeps into the pit but is quickly exhausted by use; after a while the pit refills. These pits must be renewed from time to time, since they become blocked by the silt carried with the water. Wells are the most common installations for exploiting groundwater. These are vertical excavations dug down to the groundwater level. The diameter of the wells was large, to ease the work of digging and the cleaning of the water, and the walls of the well were usually reinforced with stones. The amount of water produced from a well depended on the diggers' success in reaching a main vein of groundwater. To use the groundwater in wells it had to be drawn up. Some water-lifting devices were powered by human or animal muscle; one of the simplest and earliest was the rope and bucket, while more sophisticated and efficient devices were based on a pulley, a screw (such as the Archimedes screw) or a lever.

Water pollution Water pollution is a condition in which the concentration of certain substances in the water (substances that may also be toxic) rises, and the water no longer fulfills its previous function. Thus, for example, stream water that was a habitat for fish absorbs foreign substances that destroy the fish population. There is natural pollution of water, resulting from the accumulation of silt, animal excretions, dust, sand, soot and so on, but the main party responsible for water pollution is man, whose various activities may pollute existing water sources. In Israel the problem of water pollution is relatively common[citation needed] because of indifference and lack of awareness; the Yarkon and Kishon streams are notorious for their pollution. In the Maccabiah disaster a bridge over the Yarkon stream collapsed, and people who were on it and came into contact with the stream's water fell ill and died. Military divers who dived in the Kishon stream developed cancer, and the Ministry of Defense recognized their claim for compensation from the state.

Water in cultures and religions The many beliefs and customs surrounding water are widespread throughout the world, both in countries where water is scarce and the inhabitants know from experience that without water there is no life, and in countries where water is so abundant that people struggle to withstand its powerful destructive force. The water on Earth is distributed as 97.2% salt water in the oceans and seas and 2.8% fresh water in lakes, rivers and glaciers.
Already in ancient times people knew that water is necessary for life, that without it there is no life, and that our world is special in having liquid water. So it is in Judaism, and so also among other peoples and other religions. In the biblical creation story, on the first day of creation, when heaven and earth were created, the water already existed, as it is said: "and the spirit of God hovered over the face of the waters." On the second day the heavens and the earth, with the waters in them, were separated from each other, and on the third day the waters on the earth were gathered, to distinguish between sea and dry land. Similar descriptions, of a higher power controlling the waters and of the "ordering" and direction of the waters, are found in the ancient traditions of other peoples as well, such as the Akkadians in the east and the Native Americans in the west. All recognized the importance of water and its great power, a power that can act in two opposite directions: one of creation and the giving of life through growth and prosperity, and the other of destruction and ruin through floods and torrential rains. In the ancient world there were various beliefs about groundwater, which according to those beliefs could bring growth and agricultural yield but could also bring floods and disasters. The prayer for rain is built into Jewish tradition, and many laws deal with fasts of supplication for rain. At the same time the Bible, by contrasting the way nature (that is, God) supplies water to the inhabitants of the Land of Israel with the way it is supplied in Egypt, saw an advantage for the Land of Israel on the way to creating a just society: "it drinks water of the rain of heaven." In Judaism, water is a source of blessing and gives life to man and to all that is his, when man behaves properly; God rewards those who walk in the good path: "then I will send down the rain in its season; they shall be rains of blessing." In Indian and Babylonian mythology there are gods who are in charge of the waters and grant them to man. On the other hand, water sometimes serves as punishment. Flood stories appear among many peoples, among them the Arameans, the Jews, the Greeks and the Indians. The flood is a description of immense waters causing destruction and annihilation, and its origin lies in the belief that man is punished for his sins by a higher power. It is no wonder, then, that in Judaism, and also in other traditions, for example among African tribes, there is a "prayer for rain": a request to a higher power to direct the waters toward benefit and not harm. In addition to the recognition of the importance of water for life in general, belief in its purifying power is widespread. Just as washing the body with water cleans it of physical dirt, so water also purifies the soul of impurity and sin. In Judaism many customs connected with purification by water are practiced: ritual hand washing, immersion in a mikveh for purity and also as part of conversion, as a ceremony of entry into Judaism, the rinsing of meat, the purging of vessels, the sprinkling of special water on people and vessels defiled by contact with the dead, and the Tashlich custom on Rosh Hashanah. The peak of Jewish water celebrations is connected with the commandment of the water libation at the Simchat Beit HaShoeva festivities held during Sukkot in the Temple. In Christianity and in other religions there is the baptism of infants and the sprinkling of holy water. There are especially sacred bodies of water in which immersion is enjoined, such as the Jordan River for Christians and the Ganges for Hindus. Other beliefs are connected with water: according to the Talmud, water has the power to restore to its former state something created by sorcery; thus, for example, a log that had been turned into a donkey by sorcery became a log again when it touched water. Further beliefs hold that water has the power to distinguish the guilty from the innocent, and that it also helps to foretell the future. For this reason, among many peoples in the east and the west, water was used for judgment, divination and sorcery: in the determination of verdicts among the Babylonians in ancient times, in Judaism in the ordeal of the sotah, among the Greeks in the divination of the oracle, and in the witch trials of medieval Europe. The methods varied: submerging the suspect in water, casting stones into the water, drinking water and sprinkling water. See also Further reading External links Footnotes
========================================
[SOURCE: https://en.wikipedia.org/wiki/Day_of_Judgment] | [TOKENS: 5105]
Contents Last Judgment The Last Judgment[a][b] is a concept originating in Zoroastrianism and found across the Abrahamic religions. Christianity considers the Second Coming of Jesus Christ to entail the final judgment by God of all people who have ever lived, resulting in the salvation of a few and the damnation of many. Some Christian denominations believe most people will be saved, some believe most people will be damned, and some believe the number of the saved and of the damned is unknown. The concept of the Last Judgment is found in all the canonical gospels, particularly in the Gospel of Matthew. The Christian tradition is also followed by Islam, where it is mentioned in many chapters of the Quran, according to some interpretations. The Last Judgment has inspired numerous artistic depictions, including painting, sculpture and evangelical work. In Zoroastrianism Frashokereti is the earliest surviving articulation of a final judgement in any religion. It refers to the Zoroastrian doctrine of a final renovation of the universe, when evil will be destroyed, and everything else will be then in perfect unity with God (Ahura Mazda). The doctrinal premises are (1) good will eventually prevail over evil; (2) creation was initially perfectly good, but was subsequently corrupted by evil; (3) the world will ultimately be restored to the perfection it had at the time of creation; (4) the "salvation for the individual depended on the sum of [that person's] thoughts, words and deeds, and there could be no intervention, whether compassionate or capricious, by any divine being to alter this." Thus, each human bears responsibility for their own fate, and simultaneously shares in the responsibility for the fate of the world. In Judaism In Judaism, beliefs vary. Rosh HaShanah is sometimes referred to as a 'day of judgement', but it is not conceptualized as the Day of Judgement. Some rabbis hold that there will be a future day following the resurrection of the dead. Others hold that the final accounting and judgment happens when one dies.[citation needed] Still others hold that the Last Judgment applies to only the gentiles, not the Jewish People. The Babylonian Talmud has a lengthy passage describing the future Judgement Day. In Christianity The doctrine and iconographic depiction of the Last Judgment are drawn from many passages from the apocalyptic sections of the Bible, but most notably from Jesus' teaching of the narrow gate in the Gospel of Matthew and in the Gospel of Luke. In Christianity, there are three main beliefs about who will be saved (go to heaven) and who will be damned (go to hell) on Judgment Day. All three beliefs are based on biblical interpretation and Christian tradition. Some Christians who believe in universal salvation say most people and angels will go to heaven on Judgment Day. Some Christians who believe in double predestination say most people and angels will go to hell on Judgment Day. Other Christians who disbelieve in universal salvation and double predestination say the number of the saved and of the damned on Judgment Day is unknown. Article IV – Of the Resurrection of Christ in Anglicanism's Articles of Religion and Article III – Of the Resurrection of Christ of Methodism's Articles of Religion state that: Christ did truly rise again from death, and took again his body, with flesh, bones, and all things appertaining to the perfection of Man's nature; wherewith he ascended into Heaven, and there sitteth, until he return to judge all Men at the last day. 
Anglican and Methodist theology holds that "there is an intermediate state between death and the resurrection of the dead, in which the soul does not sleep in unconsciousness, but exists in happiness or misery till the resurrection, when it shall be reunited to the body and receive its final reward." This space, termed Hades, is divided into Paradise (the Bosom of Abraham) and Gehenna "but with an impassable gulf between the two". Souls remain in Hades until the Last Judgment and "Christians may also improve in holiness after death during the middle state before the final judgment". Anglican and Methodist theology holds that at the time of the Last Day, "Jesus will return and that He will 'judge both the quick [the living] and the dead'," and "all [will] be bodily resurrected and stand before Christ as our Judge. After the Judgment, the Righteous will go to their eternal reward in heaven, and the Accursed will depart to hell (see Matthew 25)." The "issue of this judgment shall be a permanent separation of the evil and the good, the righteous and the wicked" (see The Sheep and the Goats). Moreover, in "the final judgment every one of our thoughts, words, and deeds will be known and judged," and individuals will be justified on the basis of their faith in Jesus. However, "our works will not escape God's examination." Belief in the Last Judgment (often linked with the general judgment) is held firmly in Catholicism. Immediately upon death, each person undergoes the particular judgment, and depending upon one's behavior on earth, goes to heaven, purgatory, or hell. Those in purgatory will always reach heaven, but those in hell will be there eternally. The Last Judgment will occur after the resurrection of the dead, and "our 'mortal body' will come to life again." The Catholic Church teaches that at the time of the Last Judgment Christ will come in His glory, and all the angels with him, and in his presence the truth of each one's deeds will be laid bare. Each person who has ever lived will be judged with perfect justice. The believers who are deemed worthy as well as those ignorant of Christ's teaching who followed the dictates of conscience will go to everlasting bliss; those who are judged unworthy will go to everlasting condemnation. A decisive factor in the Last Judgment will be the question, were the corporal works of mercy practiced or not during one's lifetime. They rate as important acts of charity. Therefore, and according to the biblical sources (Mt 25:31–46), the conjunction of the Last Judgment and the works of mercy is frequent in the pictorial tradition of Christian art. Before the Last Judgment, all will be resurrected. Those who were in purgatory will have already been purged, meaning they would have already been released into heaven, and so like those in heaven and hell will resurrect with their bodies, followed by the Last Judgment. According to the Catechism of the Catholic Church: 1038 The resurrection of all the dead, "of both the just and the unjust" (Acts 24:15), will precede the Last Judgment. This will be "the hour when all who are in the tombs will hear [the Son of man's] voice and come forth, those who have done good, to the resurrection of life, and those who have done evil, to the resurrection of judgment" (Jn 5:28–29) Then Christ will come "in his glory, and all the angels with him... . 
Before him will be gathered all the nations, and he will separate them one from another as a shepherd separates the sheep from the goats, and he will place the sheep at his right hand, but the goats at the left... . And they will go away into eternal punishment, but the righteous into eternal life" (Mt 25:31, 32, 46). 1039 In the presence of Christ, who is Truth itself, the truth of each man's relationship with God will be laid bare (Cf. Jn 12:49). The Last Judgment will reveal even to its furthest consequences the good each person has done or failed to do during his earthly life. 1040 The Last Judgment will come when Christ returns in glory. Only the Father knows the day and the hour; only he determines the moment of its coming. Then through his Son Jesus Christ he will pronounce the final word on all history. We shall know the ultimate meaning of the whole work of creation and of the entire economy of salvation and understand the marvelous ways by which his Providence led everything towards its final end. The Last Judgment will reveal that God's justice triumphs over all the injustices committed by his creatures and that God's love is stronger than death. (Cf. Song 8:6) — Catechism of the Catholic Church The Eastern Orthodox and Catholic teachings of the Last Judgment differ only on the exact nature of the in-between state of purgatory/Abraham's Bosom. These differences may only be apparent and not actual due to differing theological terminology and evolving tradition. The Eastern Orthodox Church teaches that there are two judgments: the first, or particular judgment, is that experienced by each individual at the time of his or her death, at which time God will decide where one is to spend the time until the Second Coming of Christ (see Hades in Christianity). This judgment is generally believed to occur on the fortieth day after death. The second, General or Final Judgment will occur after the Second Coming. Although in modern times some have attempted to introduce the concept of soul sleep into Orthodox thought about life after death, it has never been a part of traditional Orthodox teaching, and it contradicts the Orthodox understanding of the intercession of the Saints.[citation needed] Eastern Orthodoxy teaches that salvation is bestowed by God as a free gift of divine grace, which cannot be earned, and by which forgiveness of sins is available to all. However, the deeds done by each person are believed to affect how he will be judged, following the Parable of the Sheep and the Goats. How forgiveness is to be balanced against behavior is not well-defined in scripture, judgment in the matter being solely Christ's. Similarly, although Orthodoxy teaches that sole salvation is obtained only through Christ and his Church, the fate of those outside the Church at the Last Judgment is left to the mercy of God and is not declared. The theme of the Last Judgment is important in Orthodoxy. Traditionally, an Orthodox church will have a fresco or mosaic of the Last Judgment on the back (western) wall so that the faithful, as they leave the services, are reminded that they will be judged by what they do during earthly life. The icon of the Last Judgment traditionally depicts Christ Pantokrator, enthroned in glory on a white throne, surrounded by the Theotokos (Virgin Mary), John the Baptist, the Apostles, saints and angels. 
Beneath the throne the scene is divided in half with the "mansions of the righteous" (John 14:2), i.e., those who have been saved, to Jesus' right (the viewer's left), and the torments of those who have been damned to his left. Separating the two is the river of fire which proceeds from Jesus' left foot. For more detail, see below. The theme of the Last Judgement is found in the funeral and memorial hymnody of the Church, and is a major theme in the services during Great Lent. The second Sunday before the beginning of Great Lent is dedicated to the Last Judgement. It is also found in the hymns of the Octoechos used on Saturdays throughout the year. There were many renditions of the Last Judgment completed by Greek painters living in Crete, which was then held by the Venetian Empire. Most of the works were influenced by Venetian painting but were considered to be painted in the Maniera Greca. Georgios Klontzas painted many triptychs featuring the Last Judgment, including The Last Judgment, The Last Judgement Triptych, and The Triptych of the Last Judgement, and he was the forerunner of a new painting style. Other Greek painters followed the precedent set by Klontzas. Theodore Poulakis incorporated the Last Judgement into his rendition of Klontzas' earlier work In Thee Rejoiceth, paying homage to the father of the Last Judgement style. Leos Moskos and Francheskos Kavertzas also followed the outline for the stylistic representation of the Last Judgement set by Klontzas; their works were The Last Judgment (Kavertzas) and The Last Judgment (Moskos), and both paintings resemble Klontzas' Last Judgement painting. Lutherans do not believe in any sort of earthly millennial kingdom of Christ either before or after his second coming on the last day. On the last day, all the dead will be resurrected. Their souls will then be reunited with the same bodies they had before dying. The bodies will then be changed, those of the wicked to a state of everlasting shame and torment, those of the righteous to an everlasting state of celestial glory. After the resurrection of all the dead, and the change of those still living, all nations shall be gathered before Christ, and he will separate the righteous from the wicked. Christ will publicly judge all people by the testimony of their faith – the good works of the righteous in evidence of their faith, and the evil works of the wicked in evidence of their unbelief. He will judge in righteousness in the presence of all men and angels, and his final judgment will be just damnation to everlasting punishment for the wicked and a gracious gift of life everlasting to the righteous. Although the Last Judgment is affirmed by a great part of mainstream Christian churches, some members of Esoteric Christian traditions such as the Rosicrucians, the Spiritualist movement, and some liberals instead believe in a form of universal salvation.[citation needed] Max Heindel, a Danish-American astrologer and mystic, taught that when the Day of Christ comes, marking the end of the current fifth or Aryan epoch, the human race will have to pass a final examination or last judgment, where, as in the Days of Noah, the chosen ones or pioneers, the sheep, will be separated from the goats or stragglers, by being carried forward into the next evolutionary period, inheriting the ethereal conditions of the New Galilee in the making.
Nevertheless, it is emphasized that all beings of the human evolution will ultimately be saved in a distant future as they acquire a superior grade of consciousness and altruism. At the present period, the process of human evolution is conducted by means of successive rebirths in the physical world and the salvation is seen as being mentioned in Revelation 3:12 (KJV), which states "Him that overcometh will I make a pillar in the temple of my God and he shall go no more out". However, this western esoteric tradition states – like those who have had a near-death experience – that after the death of the physical body, at the end of each physical lifetime and after the life review period (which occurs before the silver cord is broken), a judgment occurs, more akin to a Final Review or End Report over one's life, where the life of the subject is fully evaluated and scrutinized. This judgment is seen as being mentioned in Hebrews 9:27, which states that "it is appointed unto men once to die, but after this the judgment". Emanuel Swedenborg (1688–1772) had a revelation that the church has gone through a series of Last Judgments. First, during Noah's Flood, then Moses on Mount Sinai, Jesus' crucifixion, and finally in 1757, which is the final Last Judgment. These occur in a realm outside earth and heaven, and are spiritual in nature. The Church of Jesus Christ of Latter-day Saints (LDS Church) teaches that the last judgment for each individual occurs after that individual has been resurrected.[citation needed] People will be judged by Jesus Christ. Jesus' twelve apostles will help judge the twelve tribes of Israel and the twelve Nephite disciples from the Book of Mormon will help to judge the Nephite and Lamanite people. The Church of Jesus Christ of Latter-day Saints teaches that people will be judged by their words, their works, their thoughts, and the intents of their hearts. Records that have been kept in heaven and on earth will also be used to judge people. Jesus Christ will act as the advocate for people who had faith in him and such people will enter God's presence based on Jesus' merits as opposed to their own. After the final judgment, an individual is assigned to one of the three degrees of glory. In art, the Last Judgment is a common theme in medieval and renaissance religious iconography. Like most early iconographic innovations, its origins stem from Byzantine art, although it was a less common subject than in the West during the Middle Ages. In Western Christianity, it is often the subject depicted in medieval cathedrals and churches, either outside on the central tympanum of the entrance or inside on the (rear) west wall, so that the congregation attending church saw the image on either entering or leaving. In the 15th century it also appeared as the central section of a triptych on altarpieces, with the side panels showing heaven and hell, as in the Beaune Altarpiece or a triptych by Hans Memling. The usual composition has Christ seated high in the centre, flanked by angels, the Virgin Mary, and John the Evangelist who are supplicating on behalf of those being judged (in what is called a Deesis group in Orthodoxy). Saint Michael is often shown, either weighing the deceased on scales or directing matters, and there might be a large crowd of saints, angels, and the saved around the central group. At the bottom of the composition a crowd of the deceased are shown, often with some rising from their graves. These are being sorted and directed by angels into the saved and the damned. 
Almost always the saved are on the viewer's left (so on the right hand of Christ), and the damned on the right. The saved are led up to heaven, often shown as a fortified gateway, while the damned are handed over to devils who herd them down into hell on the right; the composition therefore has a circular pattern of movement. Often the damned disappear into a Hellmouth, the mouth of a huge monster, an image of Anglo-Saxon origin. The damned often include figures of high rank, wearing crowns, mitres, and often the Papal tiara during the lengthy periods when there were antipopes, or in Protestant depictions. There may be detailed depictions of the torments of the damned. The most famous Renaissance depiction is Michelangelo Buonarroti's The Last Judgment in the Sistine Chapel. Included in this fresco is his self-portrait, as St. Bartholomew's flayed skin. The image in Eastern Orthodox icons has a similar composition, but usually less space is devoted to hell, and there are often a larger number of scenes; the Orthodox readiness to label figures with inscriptions often allows more complex compositions. There is more often a large group of saints around Christ (which may include animals), and the hetoimasia or "empty throne", containing a cross, is usually shown below Christ, often guarded by archangels; figures representing Adam and Eve may kneel below it or below Christ. A distinctive feature of the Orthodox composition, especially in Russian icons, is a large band leading like a chute from the feet of Christ down to hell; this may resemble a striped snake or be a "river of Fire" coloured flame red. If it is shown as a snake, it attempts to bite Adam on the heel but, as he is protected by Christ, is unsuccessful. In Islam Belief in Judgment Day (Arabic: یوم القيامة, romanized: Yawm al-qiyāmah, lit. 'Day of Resurrection' or Arabic: یوم الدین, romanized: Yawm ad-din, lit. 'Day of Judgement') is considered a fundamental tenet of faith by all Muslims. It is one of the six articles of faith. The trials and tribulations associated with it are detailed in both the Quran and the hadith, (sayings of Muhammad), from whence they are elaborated on in the creeds, Quranic commentaries (tafsịrs), and theological writing, eschatological manuals, whose authors include al-Ghazali, Ibn Kathir, Ibn Majah, Muhammad al-Bukhari, and Ibn Khuzaymah. According to some Islamic teachings, there are two categories of heaven: those who go directly to it and those who enter it after enduring some torment in hell; Also, the people of hell are of two categories: those who stay there temporarily and those who stay there forever.[citation needed] Like Christianity, Islamic eschatology has a time of tribulation preceding Judgement Day where strange and terrible events will serve as portents; there will be a second coming of Jesus (but in different roles); battles with an AntiChrist (Al-Masīḥ ad-Dajjāl, literally "Deceitful Messiah") and struggles with Gog and Magog; and a Rapture-like removal of all righteous believers before the end. A "Day of Resurrection" of the dead (yawm al-qiyāmah), will be announced by a trumpet blast. Resurrection will be followed by a "Day of Judgment" (yawm ad-din) where all human beings who have ever lived will be held accountable for their deeds by being judged by God. Depending on the verdict of the judgement, they will be sent for eternity to either the reward of paradise (Jannah) or the punishment of hell (Jahannam). In this process, the souls will traverse over hellfire via the bridge of sirat. 
For sinners, the bridge will be thinner than hair and sharper than the sharpest sword, impossible to walk on without falling below to arrive at their fiery destination, while the righteous will proceed across the bridge to paradise (Jannah). Not everyone consigned to hell will remain there. Somewhat like the Catholic concept of purgatory, sinful Muslims will stay in hell until purified of their sins. According to the scholar Al-Subki (and others), "God will take out of the Fire everyone who has said the testimony" (i.e. the shāhada testimony made by all Muslims, "There is no deity but The God") "and none will remain to save those who rejected or worshipped other than God." While early Muslims debated whether scripture on Judgement day should be interpreted literally or figuratively, the school of thought that prevailed (Ashʿarī) "affirmed that such things as the individual records of deeds (including the paper, pen, and ink with which they are inscribed), the bridge, the balance, and the pond are realities to be understood in a concrete and literal sense." In Jainism In Jainism, there is no day of judgement as such. Jains believe, however, that as the 5th era comes to an end, evil will increase and the religion and good will decrease. Only four Jains will remain in the world: a monk, a female monk, a shravak and a shravika. A deity from the heavens will descend upon the earth and gather them, and ask them to take "Anshan", or vow to fast (without any food or water) until death. In Yarsanism In Yarsanism is a belief that people reincarnate until the Day of Resurrection when the last reincarnation occurs and pious people will be separated from sinful. God will forgive sins of pious souls and they will be rewarded with two paradises to which they will be sent according to what they look for. If they look for worldly pleasures, they will be sent to a mortal paradise, where they will perish one day. If they look for the mystical joy, then they will be sent to the immortal paradise, where they will live in the presence of God. Sinners will go to hell. Crack of doom In English, crack of doom is an old term used for the Day of Judgment, referring in particular to the blast of trumpets signalling the end of the world in Chapter 8 of the Book of Revelation. A "crack" had the sense of any loud noise, preserved in the phrase "crack of thunder", and "doom" was a term for the Last Judgment, as Eschatology still is. The phrase is famously used by William Shakespeare in Macbeth, where on the heath the Three Witches show Macbeth the line of kings that will issue from Banquo: The meaning was that Banquo's line will endure until the Judgment Day, flattery for King James I, who claimed descent from Banquo. Music See also References Bibliography Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Religion_in_the_United_States] | [TOKENS: 11490]
Contents Religion in the United States Religion in the United States is both widespread and diverse, with higher reported levels of belief than other wealthy Western nations. Polls indicate that an overwhelming majority of Americans believe in a higher power (2021), engage in spiritual practices (2022), and consider themselves religious or spiritual (2017). Christianity is the most widely professed religion, with the majority of Americans being Evangelicals, Mainline Protestants, or Catholics, although its dominance has declined in recent decades, and as of 2012 Protestants no longer formed a majority in the US. The United States has the largest Christian and Protestant population in the world. Judaism is the second-largest religion in the US, practiced by 2% of the population, followed by Hinduism, Buddhism, and Islam, each with 1% of the population. States vary in religiosity from Mississippi, where 63% of adults self-describe as very religious, to New Hampshire, where 20% do. The elected legislators of Congress overwhelmingly identify as religious and Christian; with few exceptions, both the Republican and Democratic parties nominate those who are. The historical and social characteristics of the United States that some scholars of religion credit for the country's high level of religiousness include its Constitutional guarantees of freedom of religion and legal tradition of separation of church and state; the early immigration of religious dissenters from Northwestern Europe (Anglicans, Quakers, Mennonites, and other mainline Protestants); and the religious revivalism of the first (1730s and 1740s) and second (1790s and 1840s) Great Awakenings, which led to an enormous growth in Christian congregations: from 10% of Americans being members before the Awakenings to 80% belonging after. The aftermath led to what historian Martin Marty calls the "Evangelical Empire", a period in which evangelicals dominated US cultural institutions. They influenced measures to abolish slavery, further women's rights, enact prohibition, and reform education and criminal justice. New denominations were formed (Adventism, Jehovah's Witnesses, the Latter Day Saint movement (Mormonism), Churches of Christ and Church of Christ, Scientist, Unitarian and Universalist, Pentecostalism). Outside of Protestantism, an unprecedented number of Catholic and Jewish immigrants arrived in the United States during the immigrant waves of the mid-to-late 19th and 20th centuries. Social scientists have noted that beginning in the early 1990s, the percentage of Americans professing no religious affiliation began to rise, from 6% in 1991 to 29% in 2021, with younger people having higher rates of unaffiliation. Similarly, polling indicated a decline in church attendance and in the number of people agreeing with the statement that religion is "very important" in their lives. Explanations for this trend include lack of trust in numerous institutions, backlash against the religious right in the 1980s, sexual abuse scandals in established religions, the end of the Cold War (and its connection of religiosity with patriotism), and the September 11 attacks (by religious Jihadists). Many of the "Nones" (those without a religious affiliation) hold a belief in a god or higher power and in spiritual forces beyond the natural world. As of 2024, Christianity's decline may have leveled off or slowed, according to the Pew Research Center and Gallup, though according to the Public Religion Research Institute it has continued to decline.
History Ever since its early colonial days, when some English and German Protestant dissenters settled in search of religious freedom, America has been profoundly influenced by religion. Throughout its history, religious involvement among American citizens has grown, from 17% of the US population in 1776 to 62% in 2000. Approximately 35-40 percent of Americans regularly attended religious services from eighteenth-century colonial America up to 1940. That influence continues in American culture, social life, and politics. Several of the original Thirteen Colonies were established by settlers who wished to practice their own religion within a community of like-minded people: the Massachusetts Bay Colony was established by English Puritans (Congregationalists), Pennsylvania by British Quakers, Maryland by English Catholics, and Virginia by English Anglicans. Despite these origins, and as a result of intervening religious strife and preference in England, the Plantation Act 1740 set official policy for new immigrants coming to British America until the American Revolution. While most settlers and colonists during this time were Protestant, a few early Catholic and Jewish settlers also arrived from Northwestern Europe into the colonies; however, their numbers were very slight compared to the Protestant majority. Even in the "Catholic Proprietary" colony of Maryland, the vast majority of Maryland colonists were Protestant by 1670. The text of the First Amendment in the US Constitution states that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." It guarantees the free exercise of religion while also preventing the government from establishing a state religion. However, the states were not bound by the provision, and as late as the 1830s Massachusetts provided tax money to local Congregational churches. Since the 1940s, the Supreme Court has interpreted the Fourteenth Amendment as applying the First Amendment to state and local governments.[citation needed] President John Adams and a unanimous Senate endorsed the Treaty of Tripoli in 1797, which stated: "the Government of the United States of America is not, in any sense, founded on the Christian religion." Expert researchers and authors have referred to the United States as a "Protestant nation" or "founded on Protestant principles", specifically emphasizing its Calvinist heritage. The modern official motto of the United States of America, as established in a 1956 law signed by President Dwight D. Eisenhower, is "In God We Trust". The phrase first appeared on US coins in 1864. According to a 2002 survey by the Pew Research Center, nearly 6 in 10 Americans said that religion plays an important role in their lives, compared to 33% in Great Britain, 27% in Italy, 21% in Germany, 12% in Japan, and 11% in France. The survey report stated that the results showed America having a greater similarity to developing nations (where higher percentages say that religion plays an important role) than to other wealthy nations, where religion plays a minor role. In 1963, 90% of US adults claimed to be Christians while only 2% professed no religious identity.[citation needed] In 2016, 73.7% identified as Christians while 18.2% claimed no religious affiliation.
In 2019, a Pew Research Center survey report concluded that "the religiously unaffiliated share of the population, consisting of people who describe their religious identity as atheist, agnostic or 'nothing in particular,' now stands at 26%, up from 17% in 2009" and that "both Protestantism and Catholicism are experiencing losses of population share." Many of the unaffiliated retain religious beliefs or practices without affiliating. Various explanations have been proposed for this secularization, including declining trust in the labor market, in government, in marriage, and in other aspects of life; backlash against the religious right in the 1980s; and sexual abuse scandals, particularly those within the Southern Baptist Convention and the Catholic Church. Other signs of a decline in religiosity include a fall in the percentage of respondents who say religion is "very important" in their lives compared to those who say it is not (the answer "very important" falling from 70% in 1965 to 45% in 2023, and "not very important" rising from 7% to 28% over the same period in Gallup polls), and a decline in church attendance (those who report attending church monthly or more often having declined from 52% to 45% from 2007 to 2018, according to a Pew Research Center survey). Still other sources insist that Americans are becoming more religious, and that surveys showing otherwise suffer from methodological deficiencies. A 2022 study by Pew Research Center estimated that the religiously unaffiliated had reached 30% of the population by 2020. It predicted that in the "most plausible" scenario, accounting for observed acceleration of religious switching, this would increase to 42% by 2050. Freedom of religion According to the American legal scholar and academic Noah Feldman, the United States federal government was the first government to be designed with no established religion at all. However, some states had established religions within their borders until the 1830s. Modeling the provisions concerning religion on those of the Virginia Statute for Religious Freedom, the framers of the Constitution rejected any religious test for office, and the First Amendment specifically denied the federal government any power to enact any law respecting either an establishment of religion or prohibiting its free exercise, thus protecting any religious organization, institution, or denomination from government interference. The decision was mainly influenced by European Rationalist and Protestant ideals, but was also a consequence of the pragmatic concerns of minority religious groups and small states that did not want to be under the power or influence of a national religion that did not represent them. Christianity The most popular religion in the United States is Christianity, comprising the majority of the population (73.7% of adults in 2016), with the majority of American Christians belonging to a Protestant denomination or a Protestant offshoot (such as the Latter Day Saint movement or the Jehovah's Witnesses). According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Christians were the largest religious population in all 3,143 counties in the country. Roughly 48.9% of Americans are Protestants, 23.0% are Catholics, and 1.8% are Mormons (members of the Church of Jesus Christ of Latter-day Saints). Christianity was introduced during the period of European colonization. The United States has the world's largest Christian population.
According to membership statistics from current reports and official web sites, the five largest Christian denominations are: The Southern Baptist Convention, with over 13 million adherents, is the largest of more than 200 distinctly named Protestant denominations. In 2007, members of evangelical churches comprised 26% of the American population, while another 18% belonged to mainline Protestant churches, and 7% belonged to historically black churches. A 2015 study estimates some 450,000 Christian believers from a Muslim background in the country, most of them belonging to some form of Protestantism. Beginning around 1600, Northwestern European settlers introduced the Anglican and Puritan religion, as well as Baptist, Presbyterian, Lutheran, Quaker, and Moravian denominations. Historians agree that members of mainline Protestant denominations have played leadership roles in many aspects of American life, including politics, business, science, the arts, and education. They founded most of the country's leading institutes of higher education. According to Harriet Zuckerman, 72% of American Nobel Prize laureates between 1901 and 1972 came from a Protestant background. Traditionally, Episcopalians and Presbyterians tended to be wealthier and better educated than most other religious groups, and many of the wealthiest and most affluent American families, such as the Vanderbilts, Astors, Rockefellers, Du Ponts, Roosevelts, Forbeses, Fords, Whitneys, Morgans and Harrimans, were Mainline Protestant families, although 2015/2016 Pew studies found households affiliated with Judaism and Hinduism to be more likely to have incomes over $100,000 per year than mainline Protestant households, with other American religious groups having lower median incomes. Some of the first colleges and universities in America, including Harvard, Yale, Princeton, Columbia, Dartmouth, Pennsylvania, Duke, Boston, Williams, Bowdoin, Middlebury, and Amherst, were all founded by mainline Protestant denominations. By the 1920s most had weakened or dropped their formal connection with a denomination. James Hunter argues: Several Christian groups were founded in America during the Great Awakenings. Interdenominational evangelicalism and Pentecostalism emerged; new Protestant denominations such as Adventism; non-denominational movements such as the Restoration Movement (which over time separated into the Churches of Christ, the Christian churches and churches of Christ, and the Christian Church (Disciples of Christ)); Jehovah's Witnesses (called "Bible Students" in the latter part of the 19th century); and the Church of Jesus Christ of Latter-day Saints (Mormonism). Catholicism first came to the territories now forming the United States by way of Spanish colonists in the present-day Virgin Islands (1493), Puerto Rico (1508), Florida (1513), South Carolina (1566), Georgia (1568–1684), and the southwest. The first known Catholic Mass held in what would become the United States was celebrated in 1526 by the Dominican friars Antonio de Montesinos and Anthony de Cervantes, who ministered to the San Miguel de Gualdape colonists for the three months the colony existed. The influence of the Alta California missions (1769 and onwards) forms a lasting memorial to part of this heritage. Until the 19th century, the Franciscans and other religious orders had to operate their missions under the Spanish and Portuguese governments and military.
While the Puritans were securing their Commonwealth, members of the Catholic Church in England were also planning a refuge, "for they too were being persecuted on account of their religion." Among those interested in providing a refuge for Catholics was George Calvert, the first Lord Baltimore; his son Cecil Calvert, the second Lord Baltimore, established Maryland, a "Catholic Proprietary", in 1634, more than sixty years after the founding of the Spanish Florida mission of St. Augustine. The first US Catholic university, Georgetown University, was founded in 1789. Though small in number in the beginning, Catholicism grew over the centuries to become the largest single denomination in the United States, primarily through immigration, but also through the acquisition of continental territories under the jurisdiction of French and Spanish Catholic powers. Though the European Catholic and indigenous populations of these former territories were small, their material culture endures: the original mission foundations, with their canonical Catholic names, are still recognized today in any number of cities in California, New Mexico, and Louisiana. (The most recognizable cities of California, for example, are named after Catholic saints.) While Catholic Americans were present in small numbers early in United States history, both in Maryland and in the former French and Spanish colonies that were eventually absorbed into the United States, the vast majority of Catholics in the United States today derive from unprecedented waves of immigration from primarily Catholic countries and regions (Ireland was still part of the United Kingdom until 1921, and German unification did not officially occur until 1871) during the mid-to-late 19th and 20th centuries. Irish, Hispanic, Italian, Portuguese, French Canadian, Polish, German, and Lebanese (Maronite) immigrants largely contributed to the growth in the number of Catholics in the United States. Irish and German Catholics provided by far the greatest number of Catholic immigrants before 1900. From 1815 until the close of the Civil War in 1865, 1,683,791 Irish Catholics immigrated to the US. The German states followed, providing "the second largest immigration of Catholics, clergy and lay, some 606,791 in the period 1815-1865, and another 680,000 between 1865 and 1900, while the Irish immigration in the latter period amounted to only 520,000." Of the four major national groups of clergy (early and mid-19th century)—Irish, German, Anglo-American, and French—"the French emigre priests may be said to have been the outstanding men, intellectually." As the number of Catholics increased in the late 19th and 20th century, they built up a vast system of schools (from primary schools to universities) and hospitals. Since then, the Catholic Church has founded hundreds of other colleges and universities, along with thousands of primary and secondary schools. The University of Notre Dame, for example, is ranked the best university in its state (Indiana), as Georgetown University is in the District of Columbia. The following 10 Catholic universities are also ranked among the top 100 universities in the US: University of Notre Dame, Georgetown University, Boston College, Santa Clara University, Villanova University, Marquette University, Fordham University, Gonzaga University, Loyola Marymount University, and the University of San Diego. Leo XIV has been pope since 2025 and is the first pope from the United States. 
Eastern Orthodoxy has been present in North America since the Russian colonization of Alaska; however, Alaska would not become a United States territory until 1867, and most Eastern Orthodox Russian settlers in Alaska returned to Russia after the American acquisition of the Alaskan territory. The native converts and a few priests remained behind. Most Eastern Orthodox Christians arrived in the contiguous United States as immigrants beginning in the late 19th century and throughout the 20th century. Two major groups brought Eastern Orthodoxy to America: Eastern Europeans, such as Russians, Greeks, Ukrainians, and Serbians; and immigrants from the Levant, such as Lebanese, Syrians, and Palestinians. Armenians, Indians, Copts, and Assyrians also brought Oriental Orthodoxy to America. The strength of various sects varies greatly in different regions of the country, with rural parts of the South having many evangelicals but very few Catholics (except Louisiana and the Gulf Coast, and from among the Hispanic community, both of which consist mainly of Catholics), while urbanized areas of the north Atlantic states and Great Lakes, as well as many industrial and mining towns, are heavily Catholic, though still quite mixed, especially due to the heavily Protestant African-American communities. In 1990, nearly 72% of the population of Utah was Mormon, as well as 26% of neighboring Idaho. Lutheranism is most prominent in the Upper Midwest, with North Dakota having the highest percentage of Lutherans (35% according to a 2001 survey). The largest religion, Christianity, has proportionately diminished since 1990. While the absolute number of Christians rose from 1990 to 2008, the percentage of Christians dropped from 86% to 76%. A nationwide telephone interview of 1,002 adults conducted by The Barna Group found that 70% of American adults believe that God is "the all-powerful, all-knowing creator of the universe who still rules it today", and that 9% of all American adults and 0.5% of young adults hold to what the survey defined as a "biblical worldview". Episcopalian, Presbyterian, Eastern Orthodox and United Church of Christ members have the highest number of graduate and post-graduate degrees per capita of all Christian denominations in the United States, as well as the most high-income earners. However, owing to the sheer number of Catholics, more individual Catholics hold graduate degrees and fall into the highest income brackets than do members of any other religious community. Religious minorities After Christianity, Judaism is the next largest religious affiliation in the United States, though this identification is not necessarily indicative of religious beliefs or practices. The Jewish population in the United States was approximately 6 million in 2010. A significant number of people identify themselves as American Jews on ethnic and cultural grounds rather than religious observance. For example, 19% of self-identified American Jews do not believe God exists. The 2001 ARIS study projected from its sample that there are about 5.3 million adults in the American Jewish population: 2.83 million adults (1.4% of the US adult population) are estimated to be adherents of Judaism; 1.08 million are estimated to be adherents of no religion; and 1.36 million are estimated to be adherents of a religion other than Judaism. ARIS 2008 estimated about 2.68 million adults (1.2%) in the country identify Judaism as their faith. 
According to a 2017 study, Judaism is the religion of approximately 2% of the American population. According to a 2020 study by the Pew Research Center, the core American Jewish population is estimated at 7.5 million people; this includes 5.8 million Jewish adults. According to a study by the Steinhardt Social Research Institute, as of 2020, the core American Jewish population is estimated at 7.6 million people; this includes 4.9 million adults who identify their religion as Jewish, 1.2 million Jewish adults who identify with no religion, and 1.6 million Jewish children. Jews have been present in what is now the United States since the 17th century, and have been explicitly permitted to settle since the British colonial Plantation Act 1740. Although small Western European communities initially developed and grew, large-scale immigration did not occur until the late 19th century, mainly due to persecution in parts of Eastern Europe. The Jewish community in the United States is composed predominantly of Ashkenazi Jews whose ancestors emigrated from Central and Eastern Europe. There are, however, small numbers of older (and some recently arrived) communities of Sephardi Jews with roots tracing back to 15th-century Iberia (Spain and Portugal) and North Africa. There are also Mizrahi Jews (from the Middle East, Caucasia and Central Asia), as well as much smaller numbers of Ethiopian Jews, Indian Jews, and others from various smaller Jewish ethnic divisions. Approximately 25% of the Jewish American population lives in New York City. According to the Association of Statisticians of American Religious Bodies newsletter published in March 2017, based on data from 2010, Jews were the largest minority religion in 231 counties out of the 3,143 counties in the country. According to a 2014 survey conducted by the Pew Forum on Religion and Public Life, 1.7% of adults in the U.S. identified Judaism as their religion. Among those surveyed, 44% said they were Reform Jews, 22% said they were Conservative Jews, and 14% said they were Orthodox Jews. According to the 1990 National Jewish Population Survey, 38% of Jews were affiliated with the Reform tradition, 35% were Conservative, 6% were Orthodox, 1% were Reconstructionists, 10% linked themselves to some other tradition, and 10% said they were "just Jewish". Thus, the majority of American Jews affiliated themselves with the major Jewish movements: Conservative, Orthodox, and Reform Judaism. Already in the 1980s, 20–30% of members of the largest Jewish communities, such as those of New York City, Chicago, and Miami, rejected a denominational label. According to the 2001 National Jewish Population Survey, 4.3 million American Jewish adults have some sort of strong connection to the Jewish community, whether religious or cultural. Jewishness is generally considered an ethnic identity as well as a religious one. Among the 4.3 million American Jews described as "strongly connected" to Judaism, over 80% have some sort of active engagement with Judaism, ranging from attendance at daily prayer services on one end of the spectrum to attending Passover Seders or lighting Hanukkah candles on the other. The survey also discovered that Jews in the Northeast and Midwest are generally more observant than Jews in the South or West. The Jewish American community has higher household incomes than average and is one of the best-educated religious communities in the United States. 
According to a 2016 Gallup poll, Islam is the third largest religion in the United States by numbers, after Christianity and Judaism, with 0.8% of the population identifying as Muslim. According to the Institute for Social Policy and Understanding (ISPU) in 2018, approximately 3.45 million Muslims are living in the United States, including 2.05 million adults. Compared to other faith groups surveyed (Jewish, Catholic, Protestant, Non-Affiliated), ISPU found in 2017 that Muslims were the most likely to have been born outside of the US (50%), with 36% having undergone naturalization, and the most racially diverse group (Black or African American 25%; White 24%; Arab 18%; Asian/Chinese/Japanese 18%; Mixed 7%; Hispanic 5%; Native American/American Indian/Alaska Native 1%; Other 2%). In addition to being diverse, American Muslims are the most likely to report being low income, and among those who identify as middle class, the majority are Muslim women rather than men. Although American Muslim education levels are similar to those of other religious communities, notably Christians, within the Muslim American population women surpass men in education, with 31% of Muslim women having graduated from a four-year university. 90% of Muslim Americans identify as straight. Islam in America effectively began with the arrival of African slaves. It is estimated that about 10% of African slaves transported to the United States were Muslim. Most, however, became Christians, and the United States did not have a significant Muslim population until the arrival of immigrants from Arab and East Asian Muslim areas. According to some experts, Islam later gained a higher profile through the Nation of Islam, a religious group that appealed to black Americans after the 1940s; its prominent converts included Malcolm X and Muhammad Ali. The first Muslim elected to Congress was Keith Ellison in 2006, followed by André Carson in 2008. Out of all religious groups surveyed by ISPU, Muslims were found to be the most likely to report experiences of religious discrimination (61%). That can also be broken down by gender (with Muslim women more likely than Muslim men to experience racial discrimination), age (with young people more likely to report experiencing racial discrimination than older people), and race (with Arab Muslims the most likely to report experiencing religious discrimination). Muslims born in the United States are more likely to experience all three forms of discrimination: gender, religious, and racial. Research indicates that Muslims in the United States are generally more assimilated and prosperous than their counterparts in Europe. Like other subcultural and religious communities, the Islamic community has generated its own political organizations and charity organizations. Hinduism represented approximately 1% of the U.S. population in the 2010s. In 2001, there were an estimated 766,000 Hindus in the US, about 0.2% of the total population. It is not clear when Hinduism first entered the United States. However, large groups of Hindus have immigrated from India, Sri Lanka, Nepal, Pakistan, Bangladesh, Guyana, Trinidad and Tobago, other parts of the Caribbean, southern Africa, eastern Africa, Singapore, Malaysia, Indonesia, Mauritius, Fiji, Europe, Australia, New Zealand, and other regions and countries since the enactment of the Immigration and Nationality Act of 1965. During the 1960s and 1970s, Hinduism held a fascination for many Americans and contributed to the development of New Age thought. 
During the same decades, the International Society for Krishna Consciousness (ISKCON), a Vaishnavite Hindu reform organization, was founded in the US by A. C. Bhaktivedanta Swami Prabhupada. In 2003, the Hindu American Foundation—a national institution protecting the rights of the Hindu community of the US—was founded. According to the Association of Statisticians of American Religious Bodies newsletter published in March 2017, based on data from 2010, Hindus were the largest minority religion in 92 counties out of the 3,143 counties in the country. American Hindus have one of the highest rates of educational attainment and household income among all religious communities and tend to have lower divorce rates. Hindus also report a higher acceptance of homosexuality (71%) than the general public (62%). The Baháʼí Faith was first mentioned in the United States in 1893 at the World Parliament of Religions in Chicago. Soon after, early American converts began embracing the new religion. Thornton Chase, who converted in 1894, was the first American Baháʼí. One of the first Baháʼí institutions in the US was established in Chicago to facilitate the establishment of the first Baháʼí House of Worship in the West, which was eventually built in Wilmette, Illinois, and dedicated in 1953. Worldwide, the religion grew faster than the rate of population growth over the 20th century, and since the 1980s it has been recognized as the most widespread minority religion in the countries of the world. Similarly, by 2020, the religion was the largest minority religion in about half of US counties. Since about 1970, the state with the single largest Baháʼí population has been South Carolina. According to 2010 data, the largest county-level populations of Baháʼís are in Los Angeles County, CA; Palm Beach County, FL; Harris County, TX; and Cook County, IL. However, estimates of the total number of Baháʼís vary widely, from around 175,000 to 500,000. Druze began migrating to the United States in the late 1800s from the Levant (Syria and Lebanon). Druze emigration to the Americas increased from the outset of the 20th century onward, driven by the 1860 Mount Lebanon civil war, the famine during World War I that killed an estimated one third to one half of the population, and later the Lebanese Civil War of 1975–1990. The United States is the second largest home of Druze communities outside the Middle East, after Venezuela (60,000). According to some estimates there are about 30,000 to 50,000 Druze in the United States, with the largest concentration in Southern California. American Druze are mostly of Lebanese and Syrian descent. Members of the Druze faith face the difficulty of finding a Druze partner while practicing endogamy, as marriage outside the Druze faith is strongly discouraged by Druze doctrine. They also face the pressure of keeping the religion alive, because many Druze immigrants to the United States converted to Protestantism, becoming communicants of the Presbyterian or Methodist churches. Rastafarians began migrating to the United States in the 1950s, '60s and '70s from the religion's 1930s birthplace, Jamaica. Marcus Garvey, who is considered a prophet by many Rastafarians, rose to prominence and cultivated many of his ideas in the United States. Buddhism entered the United States during the 19th century with the arrival of the first immigrants from East Asia. The first Buddhist temple was established in San Francisco in 1853 by Chinese Americans. 
The first prominent US citizen to publicly convert to Buddhism was Colonel Henry Steel Olcott, who converted in 1880 and is still honored in Sri Lanka for his Buddhist revival efforts. An event that contributed to the strengthening of Buddhism in the United States was the Parliament of the World's Religions in 1893, which was attended by many Buddhist delegates sent from India, China, Japan, Vietnam, Thailand and Sri Lanka. In the late 19th century, Buddhist missionaries from Japan traveled to the US, and during the same time period, US intellectuals started to take an interest in Buddhism. The early 20th century was characterized by continuing tendencies rooted in the 19th century. The second half of the century, by contrast, saw the emergence of new approaches and the movement of Buddhism into the mainstream, making it a mass social and religious phenomenon. According to a 2016 study, Buddhists are approximately 1% of the American population. According to the Association of Statisticians of American Religious Bodies newsletter published in March 2017, based on data from 2010, Buddhists were the largest minority religion in 186 counties out of the 3,143 counties in the country. Sikhism, a religion originating in the Indian subcontinent, was introduced into the United States around the turn of the 20th century, when Sikhs started emigrating to the United States in significant numbers to work on farms in California. They were the first community to come from India to the US in large numbers. The first Sikh Gurdwara in America was built in Stockton, California, in 1912. In 2007, there were estimated to be between 250,000 and 500,000 Sikhs living in the United States, with the largest populations on the East and West Coasts and additional populations in Detroit, Chicago, and Austin. The United States also has a number of non-Punjabi converts to Sikhism. Adherents of Jainism first arrived in the United States in the 20th century. The most significant period of Jain immigration was the early 1970s. The United States has since become a center of the Jain Diaspora. The Federation of Jain Associations in North America is an umbrella organization of local American and Canadian Jain congregations to preserve, practice, and promote Jainism and the Jain way of life. Taoism was popularized throughout the world by the writings and teachings of Laozi and other Taoists, as well as the practice of qigong, tai chi, and other Chinese martial arts. The first Taoists in the United States were immigrants from China during the mid-nineteenth century. They settled mainly in California, where they built the first Taoist temples in the country, including the Tin How Temple in San Francisco's Chinatown and the Joss House in Weaverville. The Temple of Original Simplicity currently operates outside Boston, Massachusetts. In 2004, there were an estimated 56,000 Taoists in the US. Native American ethnic and indigenous faiths historically exhibited much diversity, and are often characterized by animism or panentheism, along with shamanism. A common concept is a supernatural world of deities, spirits, and wonders, such as the Algonquian manitou or the Lakota wakan. In most areas, a supreme Great Spirit or sky deity was known even without Christian influence. As anthropologists note, their creation myths and sacred oral traditions, taken as a whole, are comparable to the Christian Bible. The membership of Native American religions in the 21st century comprises about 9,000 people. 
Since Native Americans practicing traditional ceremonies do not usually have public organizations or membership rolls, these membership estimates are likely substantially lower than the actual numbers of people who participate in traditional ceremonies. The following is a list of indigenous American religions that still survive to some degree at the beginning of the 21st century: Alaska Native religions, Abenaki, Anishinaabe (Ojibwe, Midewiwin society), Apache, Blackfoot, Californian (Kuksu religion, Miwok, Ohlone and Pomo), Choctaw, Crow, Haida, Ho-Chunk, Iroquois (Cherokee, Mohawk, Muscogee Creek, Seneca and Wyandot), Jivaroan, Kwakwakaʼwakw, Lenape, Mapuche, Navajo, Nuu-chah-nulth, Pawnee, Pueblo (Acoma Pueblo, Hopi and Zuni), Sioux (Assiniboine, Dakota and Lakota), Tsimshian, Ute, and Yaqui beliefs. There are also numerous indigenist revitalization movements within them, which can be divided into fundamentalist and reform movements. Generally fundamentalist movements include the Pueblo Revolt (1680s), the Shawnee Prophet movement (1805–1811), the Cherokee Prophet movement (1811–1813), the Red Stick War (1813–1814), White Path's Rebellion (1826), the Winnebago Prophet movement (1830–1832), the first Ghost Dance (1869–1870) and the second Ghost Dance (1889–1890), and the Snake movements among the Cherokee, Choctaw, and Muscogee Creek peoples during the 1890s. Generally syncretic reform movements include the Yaqui religion (1500–present), the Longhouse religion (1797–present), the Munsee Prophetess movement (1804–1805), the Kickapoo Prophet movement (1815–present), the Cherokee Keetoowah Society (1858–present), the Washat Dreamers religion (1850–present), the Indian Shakers (1881–present), the Native American Church (1800s–present), the Shoshoni Sun Dance (1890–present), the New Tidings religion or Wocekiye of the Canadian Sioux (1900–present), and the Ojibwe Drummer movement (contemporary). For example, the Longhouse Religion combines and reinterprets elements of traditional Iroquois beliefs with a revised code, requiring adherents to refrain from drinking, selling off land, intensive animal farming, and witchcraft; it was meant to revive traditional consciousness after a long period of cultural disintegration following colonization. It was founded in 1797 by the Seneca prophet Handsome Lake (Sganyodaiyoˀ). The movement had about 5,000 practicing members as of 1969. Beginning in 1889, in accordance with the millenarian teachings of the Northern Paiute spiritual leader Wovoka, the Ghost Dance ceremony was incorporated into numerous native belief systems. The Sun Dance, Shoshone in origin in its 1890 form, is a prominent living ceremony and movement practiced by a number of peoples, primarily those of the Plains Nations. Many of the ceremonies have features in common, such as specific dances and songs, the use of drums, the ceremonial pipe, praying, fasting, and, in some cases, the piercing of the skin as a sacrifice. At most ceremonies, other participants stay in the surrounding camp and pray to support the dancers. The Native American Church is a syncretistic religious tradition of 19th-century origin involving the ceremonial and sacred use of Lophophora williamsii (peyote). Neopaganism in the United States is represented by widely different movements and organizations. The largest Neopagan religion is Wicca, followed by Neo-Druidism. Other neopagan movements include Germanic Neopaganism, Celtic Reconstructionist Paganism, Hellenic Polytheistic Reconstructionism, and Semitic neopaganism. 
Wicca was advanced in North America in the 1960s by Raymond Buckland, an expatriate Briton who visited Gerald Gardner's Isle of Man coven to gain initiation. Universal Eclectic Wicca was popularized in 1969 for a diverse membership drawing from both Dianic and British Traditional Wiccan backgrounds. According to the American Religious Identification Survey (ARIS), there are approximately 30,000 druids in the United States. Modern Druidism arrived in North America first in the form of fraternal Druidic organizations in the nineteenth century, and orders such as the Ancient Order of Druids in America were founded as distinct American groups as early as 1912. In 1963, the Reformed Druids of North America (RDNA) was established by students at Carleton College, Northfield, Minnesota. They adopted elements of Neopaganism into their practices, for instance, celebrating the festivals of the Wheel of the Year. A group of churches that started in the 1830s in the United States is known under the banner of "New Thought." These churches share a spiritual, metaphysical and mystical predisposition and understanding of the Bible and were strongly influenced by the Transcendentalist movement, particularly the work of Ralph Waldo Emerson. Another antecedent of this movement was Swedenborgianism, founded on the writings of Emanuel Swedenborg in 1787. The New Thought concept was named by Emma Curtis Hopkins ("teacher of teachers") after Hopkins broke off from Mary Baker Eddy's Church of Christ, Scientist. The movement had been previously known as the Mental Sciences or the Christian Sciences. The three major branches are Religious Science, Unity Church, and Divine Science. Unitarian Universalists (UUs) are among the most liberal of all religious denominations in America. The shared creed includes beliefs in inherent dignity, a common search for truth, respect for the beliefs of others, compassion, and social action. They are unified by their shared search for spiritual growth and by the understanding that an individual's theology is a result of that search and not obedience to an authoritarian requirement. UUs have historical ties to anti-war, civil rights, and LGBTQ rights movements, as well as a history of providing inclusive church services for a broad spectrum of liberal Christians, liberal Jews, secular humanists, LGBTQ people, Jewish-Christian parents and partners, Earth-centered/Wiccan practitioners, and Buddhist meditation adherents. In fact, many UUs also identify with another religious group or stance, including atheism and agnosticism. No religion In 2024, approximately 21.4% of Americans declared themselves not religiously affiliated. A 2001 survey directed by Dr. Ariela Keysar for the City University of New York indicated that, amongst the more than 100 categories of response, "no religious identification" had the greatest increase in population in both absolute and percentage terms. This category included atheists, agnostics, humanists, and others with no stated religious preferences. Figures are up from 14.3 million in 1990 to 34.2 million in 2008, representing an increase from 8% of the total population in 1990 to 15% in 2008. A nationwide Pew Research study published in 2008 put the figure of unaffiliated persons at 16.1%, while another Pew study published in 2012 was described as placing the proportion at about 20% overall and roughly 33% for the 18–29-year-old demographic. 
It is unknown why the number of self-identified "nones" is rising, although it may relate to a general decline of trust in institutions, the September 11 attacks, the rise of the religious right, and sexual abuse scandals, particularly those within the Southern Baptist Convention and Catholic Church. The majority of "nones" have religion-like beliefs and believe in some conception of a higher power. In a 2006 nationwide poll, University of Minnesota researchers found that despite an increasing acceptance of religious diversity, atheists were generally distrusted by other Americans, who trusted them less than Muslims, recent immigrants and other minority groups in "sharing their vision of American society". They also associated atheists with undesirable attributes such as amorality, criminal behavior, rampant materialism and cultural elitism. However, the same study also reported that "The researchers also found acceptance or rejection of atheists is related not only to personal religiosity, but also to one's exposure to diversity, education and political orientation – with more educated, East and West Coast Americans more accepting of atheists than their Midwestern counterparts." Some surveys have indicated that doubts about the existence of the divine were growing quickly among Americans under 30. On March 24, 2012, American atheists sponsored the Reason Rally in Washington, D.C., followed by the American Atheist Convention in Bethesda, Maryland. Organizers called the estimated crowd of 8,000–10,000 the largest-ever US gathering of atheists in one place. Secular people in the United States, such as atheists and agnostics, have a distinctive secular tradition that can be traced back at least hundreds of years. They sometimes create religion-like institutions and communities, create rituals, and debate aspects of their shared beliefs. Various polls have been conducted to determine Americans' actual beliefs regarding a god; different wordings of the poll question give significantly different results. "Spiritual but not religious" (SBNR) is a self-identified stance of spirituality that takes issue with organized religion as the sole or most valuable means of furthering spiritual growth. Spirituality places an emphasis upon the wellbeing of the "mind-body-spirit", so holistic activities such as tai chi, reiki, and yoga are common within the SBNR movement. In contrast to religion, spirituality has often been associated with the interior life of the individual. One fifth of the US public and a third of adults under the age of 30 are reportedly unaffiliated with any religion; however, they identify as being spiritual in some way. Of these religiously unaffiliated Americans, 37% classify themselves as spiritual but not religious. According to some sociologists, perceptions of religious decline are a popular misconception. They state that surveys suggesting decline suffer from methodological deficiencies, that Americans are becoming more religious, and that atheists and agnostics make up a small and stable percentage of the population. "Religious belief and interest" has remained relatively stable in recent years; "organizational participation", in contrast, has decreased. Major US-origin movements Statistics and measuring religion The US census does not ask about religion. Various groups have conducted surveys to determine approximate percentages of those affiliated with each religious group. Since the first American census in 1790, census forms have never asked the religion of participants, with Vincent P. 
Barabba, former head of the United States Census Bureau, stating in April 1976 that "asking such a question in the decennial census, in which replies are mandatory, would appear to infringe upon the traditional separation of church and state" and "could affect public cooperation in the census". Data on religious affiliation comes from independent pollsters such as the Pew Research Center and other agencies or, for membership, from religious associations, such as the Yearbook of American and Canadian Churches of the National Council of Churches. Independent polling results on religion are uncertain for several reasons. Researchers note that an estimated 20–40% of the population changes their self-reported religious affiliation or identity over time, and that it is usually their answers on surveys that change, not necessarily their religious practices or beliefs. Researchers also advise caution when looking at the "Nones" demographic, because different surveys show systematic discrepancies of 8 percentage points or more in their estimates, partly because respondents answer inconsistently and partly because the questions are worded differently, generating consistent discrepancies in responses. According to Gallup, responses vary with how questions are asked: since the early 2000s it has routinely asked about complex matters such as belief in God using three different wordings, and it consistently receives three different percentages in response. The Public Religion Research Institute (PRRI) has made annual estimates of religious adherence in the United States since 2013, most recently updating its data in 2020. Its data can be broken down to the state level, and data has also been made available for several large metro areas. The data is collected from roughly 50,000 telephone interviews conducted every year. The most recent data shows that approximately 70% of Americans are Christians (down from 71% in 2013), with about 46% of the population professing Protestant Christianity and another 22% adhering to Catholicism. About 23% of the population adheres to no religion, and a further 7% professes a non-Christian religion (such as Judaism, Islam, or Hinduism). The Association of Statisticians of American Religious Bodies (ASARB) surveyed congregations for their memberships; churches were asked for their membership numbers. Adjustments were made for those congregations that did not respond and for religious groups that reported only adult membership. ASARB estimates that most of the churches not responding were black Protestant congregations. Significant differences in results from other databases include the lower representation of adherents of (1) all kinds (62.7%), (2) Christians (59.9%), and (3) Protestants (less than 36%), and the greater number of unaffiliated (37.3%). The Pew Forum's 2014 survey broke down religious affiliation among ethnic groups in the United States. People of Black ethnicity were the most likely to be part of a formal religion, with 80% being Christians. Protestant denominations make up the majority of Christians across these ethnic groups. The United States government does not collect religious data in its census. The American Religious Identification Survey (ARIS) of 2008 was a random digit-dialed telephone survey of 54,461 American residential households in the contiguous United States. 
The 1990 sample size was 113,723; the 2001 sample size was 50,281. Adult respondents were asked the open-ended question, "What is your religion, if any?" Interviewers did not prompt or offer a suggested list of potential answers. The religion of the spouse or partner was also asked. If the initial answer was "Protestant" or "Christian", further questions were asked to probe the particular denomination. About one third of the sample was asked more detailed demographic questions. The survey reported religious self-identification of the US adult population for 1990, 2001, and 2008; figures are not adjusted for refusals to reply, and investigators suspect refusals are possibly more representative of "no religion" than any other group. Media estimates of the number of adult US citizens who consider themselves evangelicals are too high according to 2024 data from the American Worldview Inventory 2024 (AWVI 2024, organized by the Cultural Research Center at Arizona Christian University under the leadership of researcher George Barna). Rather than the conventional estimate of 25% to 40%, only 10% of adult US citizens consider themselves evangelicals, and of that 10% self-identifying as evangelicals, roughly two thirds do not follow major points of evangelical Christian doctrine. Gallup survey data found that 73% of Americans were members of a church, synagogue or mosque in 1937, peaking at 76% shortly after World War II, before trending slightly downward to 70% by 2000. The percentage declined steadily during the first two decades of the 21st century, reaching 47% in 2020. Gallup attributed the decline to increasing numbers of Americans expressing no religious preference. A 2013 Public Religion Research Institute survey reported that 31% of Americans attend religious services at least weekly. According to a 2022 Gallup poll, 75% of Americans report praying often or sometimes, and religion plays a very (46%) or fairly (26%) important role in their lives. In a 2009 Gallup survey, 41.6% of American residents stated that they attended a church, synagogue, or mosque once a week or almost every week. This percentage is higher than in other surveyed Western countries. Church attendance varies considerably by state and region. The figures, updated to 2014, ranged from 51% in Utah to 17% in Vermont. When it comes to mosque attendance specifically, data collected by a 2017 poll by the Institute for Social Policy and Understanding (ISPU) shows that American Muslim women and men attend the mosque at similar rates (45% for men and 35% for women). Additionally, compared with the general public's attendance of religious services, young Muslim Americans attend the mosque at rates closer to those of older Muslim Americans. Muslim Americans who regularly attend mosques are more likely to work with their neighbors to solve community problems (49 vs. 30 percent), be registered to vote (74 vs. 49 percent), and plan to vote (92 vs. 81 percent). Overall, "there is no correlation between Muslim attitudes toward violence and their frequency of mosque attendance". Religion and politics In August 2010, 67% of Americans said religion was losing influence, compared with 59% who said this in 2006. Majorities of white evangelical Protestants (79%), white mainline Protestants (67%), black Protestants (56%), Catholics (71%), and the religiously unaffiliated (62%) all agreed that religion was losing influence on American life; 53% of the total public said this was a bad thing, while just 10% saw it as a good thing. 
Politicians frequently discuss their religion when campaigning, and fundamentalists and black Protestants are highly politically active. However, to keep their status as tax-exempt organizations, religious bodies must not officially endorse a candidate. Historically, Catholics were heavily Democratic before the 1970s, while mainline Protestants comprised the core of the Republican Party. Those patterns have faded away—Catholics, for example, now split about 50–50. However, white evangelicals since 1980 have made up a solidly Republican group that favors conservative candidates. Secular voters are increasingly Democratic. Only four major-party presidential candidates have been Catholics, all of them Democrats: Al Smith in 1928, John F. Kennedy in 1960, John Kerry in 2004, and Joe Biden in 2020. Joe Lieberman, Al Gore's running mate on the Gore–Lieberman ticket of 2000, was the first Jewish candidate on a major party's presidential ticket (although John Kerry and Barry Goldwater both had Jewish ancestry, they were practicing Christians). Bernie Sanders ran against Hillary Clinton in the Democratic primary of 2016. He was the first major Jewish candidate to compete in the presidential primary process. However, Sanders noted during the campaign that he does not actively practice any religion. In 2006 Keith Ellison of Minnesota became the first Muslim elected to Congress; when re-enacting his swearing-in for photos, he used the copy of the Qur'an once owned by Thomas Jefferson. André Carson is the second Muslim to serve in Congress. A Gallup poll released in 2007 indicated that 53% of Americans would refuse to vote for an atheist as president, up from 48% in 1987 and 1999. The number then began to drop again, reaching a record low of 43% in 2012 and 40% in 2015. Mitt Romney, the Republican presidential nominee in 2012, is a Mormon and a member of the Church of Jesus Christ of Latter-day Saints. He is the former governor of the state of Massachusetts, and his father George Romney was the governor of the state of Michigan. On January 3, 2013, Tulsi Gabbard became the first Hindu member of Congress, using a copy of the Bhagavad Gita while being sworn in. By age The Pew Research Center in 2020 reported that teenagers have lower levels of affiliation with Christianity than their parents and higher levels of non-affiliation. The gender gap among teenagers was not significant in the poll; among adults, by comparison, women tend to be more religious than men. By gender According to the 2023–2024 Religious Landscape Study done by the Pew Research Center, women are more likely to be religious than men in terms of affiliation. Theism, religion, morality, and politics The Pew Research Center has routinely conducted surveys on theism, religion, and morality since 2002, asking respondents which of two statements comes closest to their opinion: that it is not necessary to believe in God in order to be moral and have good values, or that it is necessary to believe in God in order to be moral and have good values. Pew reports separate online and telephone trend series for the question of whether it is necessary to believe in God to be a good person. 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-176] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Minecraft was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten took over control of the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position even when suspended in mid-air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villager NPCs, trading emeralds for different goods and vice versa. 
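The block grid described above can be pictured as a simple three-dimensional array of block IDs. The following is a minimal, hypothetical sketch of that data structure in Java (the language Minecraft was originally written in); the class name, the 16-block chunk size, and the block IDs are invented for illustration and are not Mojang's actual implementation.

```java
// Minimal illustrative sketch of a voxel grid, not Mojang's code.
// A fixed-size chunk stores one block ID per (x, y, z) cell; placing a block
// is modeled as writing a non-air ID, mining as writing the air ID back.
public class VoxelChunk {
    public static final int SIZE = 16;      // assumed chunk edge length
    public static final byte AIR = 0;       // assumed ID for an empty cell

    private final byte[] blocks = new byte[SIZE * SIZE * SIZE];

    private static int index(int x, int y, int z) {
        return (y * SIZE + z) * SIZE + x;   // flatten 3D coordinates into the array
    }

    public byte getBlock(int x, int y, int z) {
        return blocks[index(x, y, z)];
    }

    public void setBlock(int x, int y, int z, byte id) {
        blocks[index(x, y, z)] = id;        // place a block
    }

    public void breakBlock(int x, int y, int z) {
        blocks[index(x, y, z)] = AIR;       // mine a block
    }

    public static void main(String[] args) {
        VoxelChunk chunk = new VoxelChunk();
        chunk.setBlock(3, 5, 7, (byte) 1);  // e.g. 1 stands for stone in this sketch
        chunk.breakBlock(3, 5, 7);
        System.out.println(chunk.getBlock(3, 5, 7)); // prints 0 (air)
    }
}
```

Storing IDs per cell rather than full objects is what makes the blocky world cheap to keep in memory; position is implicit in the array index, which is why most blocks simply stay where they are put.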
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
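World generation, as described above, is deterministic in the map seed: the same seed reproduces the same terrain no matter how far it is explored or on which machine. The sketch below illustrates that idea with a toy, hash-based height function written in Java; it is a hypothetical stand-in for Mojang's noise-based generator, and every name and constant in it is invented for the example.

```java
import java.util.Random;

// Illustrative sketch of seeded, deterministic terrain generation.
// Real Minecraft uses layered gradient noise; here a per-column Random seeded
// from (worldSeed, x, z) stands in for it, so the same world seed always
// reproduces the same heights, however far the player explores.
public class SeededTerrain {
    private final long worldSeed;

    public SeededTerrain(long worldSeed) {
        this.worldSeed = worldSeed;
    }

    /** Surface height for the column at (x, z), derived only from the seed. */
    public int heightAt(int x, int z) {
        long columnSeed = worldSeed ^ (x * 341873128712L) ^ (z * 132897987541L);
        Random random = new Random(columnSeed);
        return 60 + random.nextInt(10);   // heights in an assumed 60-69 band
    }

    public static void main(String[] args) {
        SeededTerrain a = new SeededTerrain(8675309L);
        SeededTerrain b = new SeededTerrain(8675309L);
        // Identical seeds produce identical terrain at any coordinate.
        System.out.println(a.heightAt(100_000, -42) == b.heightAt(100_000, -42)); // true
    }
}
```

Because terrain is a pure function of the seed and the coordinates, chunks can be generated lazily as players wander, which is how an effectively infinite world can exist without ever being stored in full.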
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough, which takes about nine minutes to scroll past, is the game's only narrative text, and is the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after five minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a Realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Realms server owners on Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, it was announced that Realms would support cross-platform play between the Windows 10, iOS, and Android versions starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
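The username and IP restrictions mentioned above amount to an allow/deny check performed whenever a player tries to join. The Java sketch below illustrates the concept only; it is not Mojang's server code, and the class name, hard-coded lists, and example addresses are all invented for the example (real servers keep such lists in configuration files maintained through operator commands).

```java
import java.util.Set;

// Illustrative sketch of the kind of allow/deny check a server operator
// configures: banned IP addresses are rejected outright, and when a
// whitelist is enabled, only listed usernames may join.
public class ConnectionGate {
    private final boolean whitelistEnabled;
    private final Set<String> whitelistedNames;
    private final Set<String> bannedIps;

    public ConnectionGate(boolean whitelistEnabled,
                          Set<String> whitelistedNames,
                          Set<String> bannedIps) {
        this.whitelistEnabled = whitelistEnabled;
        this.whitelistedNames = whitelistedNames;
        this.bannedIps = bannedIps;
    }

    /** Decide whether a joining player should be admitted to the server. */
    public boolean mayJoin(String username, String ipAddress) {
        if (bannedIps.contains(ipAddress)) {
            return false;                                   // explicit IP ban
        }
        return !whitelistEnabled || whitelistedNames.contains(username);
    }

    public static void main(String[] args) {
        ConnectionGate gate = new ConnectionGate(
                true, Set.of("Alex", "Steve"), Set.of("203.0.113.7"));
        System.out.println(gate.mayJoin("Alex", "198.51.100.2"));      // true
        System.out.println(gate.mayJoin("Herobrine", "198.51.100.2")); // false
    }
}
```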
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009,[k] and on 13 May Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate version of Bedrock Edition, originally released as the Windows 10 Edition, is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed over a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot on an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in the first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
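The redstone machines mentioned above, up to working hard drives and 8-bit computers, are built by composing simple logic elements, chiefly the redstone torch, which outputs power only when the block it is attached to is unpowered (a NOT gate). The Python sketch below models that composition outside the game as a teaching illustration of the kind used in the programming mods described above; it is not game code, and the gate constructions are simplified assumptions rather than exact in-game circuits.

# Toy model of redstone-style logic. A torch inverts the signal powering the
# block it sits on (NOT); merging two wires behaves like OR. Everything else
# here is composed from those two elements.
def torch(powered: bool) -> bool:
    return not powered

def or_gate(a: bool, b: bool) -> bool:
    return a or b

def and_gate(a: bool, b: bool) -> bool:
    # Classic three-torch AND: invert both inputs, then invert their OR.
    return torch(or_gate(torch(a), torch(b)))

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    # Sum = (a OR b) AND NOT(a AND b); carry = a AND b. Adders like this are
    # the building blocks of in-game ALUs and "8-bit computers".
    total = and_gate(or_gate(a, b), torch(and_gate(a, b)))
    carry = and_gate(a, b)
    return total, carry

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"a={int(a)} b={int(b)} -> sum={int(s)} carry={int(c)}")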
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a twice-yearly event as part of Minecraft's changing update schedule. Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_leisure] | [TOKENS: 2119]
Contents Sociology of leisure The sociology of leisure or leisure sociology is the study of how humans organize their free time. Leisure includes a broad array of activities, such as sport, tourism, and the playing of games. The sociology of leisure is closely tied to the sociology of work, as each explores a different side of the work-leisure relationship. More recent studies in the field move away from this relationship, however, and focus on the relation between leisure and culture. Studies of leisure have determined that observable patterns in human leisure behavior cannot be explained solely by socioeconomic variables such as age, income, occupation or education. The type of leisure activity is substantially influenced by numerous more complex factors, such as the presence or lack of family, religious beliefs and the general cultural values one adheres to. Definitions and theoretical concerns Definitions of leisure are numerous and often mutually contradictory, describing it, for example, as a discrete portion of one's time or as a quality of experience irrespective of time. Joffre Dumazedier distinguished four distinct definitions of leisure, which begin broadly and gradually narrow in scope; even these four definitions are not exhaustive. Incompatible definitions and measures are seen as a major factor accounting for occasionally contradictory research findings. There are some unresolved questions concerning the definition of work: in particular, whether unpaid endeavors, such as volunteering or studying, are work. Non-work time should not be equated with free time, as it comprises not only free time, dedicated to leisure, but also time dedicated to certain obligatory activities, such as housework. Dividing activities into free and dedicated time is not easy. For example, brushing one's teeth is neither work nor leisure; scholars differ in their classifications of activities such as eating a meal, shopping, repairing a car, attending a religious ceremony, or showering (various individuals may or may not classify such activities as leisure). The relation between work and leisure can also be unclear: research indicates that some individuals find skills that they have acquired at work useful to their hobbies (and vice versa), and some individuals have used leisure activities to advance their work careers. Sociologists also disagree as to whether political or spiritual activities should be included in studies of leisure. Further, among some occupational communities, such as police officers or miners, it is common for colleagues to be off-time friends and to share similar, work-based leisure activities. Apart from a definition of leisure, there are other questions of theoretical concern to the sociologist of leisure. For example, quantifying the results is difficult, as time-budget studies have noted that a given amount of time (for example, an hour) may have different values, depending on when it occurs—within a day, a week, or a year. Finally, as with many other fields of inquiry in the social sciences, the study of the sociology of leisure is hampered by the lack of reliable data for comparative longitudinal studies, as there was little to no standardized data-gathering on leisure throughout most of human history. 
The lack of longitudinal studies has been remedied in the last few decades by recurring national surveys such as the General Household Survey in the United Kingdom (ongoing since 1971). In addition to surveys, an increasing number of studies have been focusing on qualitative methods of research (interviews). Simply having free time cannot be considered leisure, as unemployed people usually have a lot of free time, yet their lack of professional activity may throw them into a state of anomie or ennui. In general, many people spend most of their free time consuming social media content and videos online in an addictive manner, despite the negative repercussions on mental health, indicating that constructive leisure must be learned. History Sociology of leisure is a fairly recent subfield of sociology, compared to more traditional subfields such as sociology of work, sociology of the family, or sociology of education: it saw most of its development in the second half of the 20th century.[a] Until then, leisure had often been seen as a relatively unimportant, minor feature of society. Now, however, it is recognized as a major social institution, deserving of serious sociological inquiry, particularly in Western societies. One of the earliest theories of leisure originates with Karl Marx, who framed it in terms of a 'realm of freedom'. Marx's criticism of capitalism saw the structures of capital as being in conflict with people truly attaining leisure; the basis of leisure is thus rooted in economics and politics, which are themselves intertwined. In contrast to this more socialist approach, many see leisure time as an excuse for unproductivity and as something undeserved: not something that should never be attained, but something that should not get in the way of economic activity. It is within such structures that Marx's theories have not only remained relevant, but that his criticisms of his own time can remain true to this day. Marx's criticism of capitalism was rooted in the exploitation of the worker, a conflict waged against the worker, class warfare in effect. In The Marx-Engels Reader, an overview of the writings and theories of Marxism, the 'realm of freedom' and the 'realm of necessity' are elaborated on at length, as these were new concepts at the time. The realm of freedom captures a true definition of leisure, as it embraces activities undertaken out of want and the pleasure of doing them, whereas living to survive (working, eating, sleeping) belongs to the realm of necessity. Over time, emphasis in studies of leisure has shifted from the work-leisure relation, particularly in well-researched majorities, to study of minorities and the relation between leisure and culture. Gordon Marshall noted that there are two approaches in the study of leisure: formal and historical-theoretical. The formal approach focuses on empirical questions, such as the shifting of leisure patterns over an individual's life cycle, the relation between leisure and work, and specific forms of leisure (such as the sociology of sport). The historical-theoretical approach studies the relation between leisure and social change, often from structural-functionalist and neo-Marxist perspectives. Sheila Scraton provided a different analysis, comparing North American and British studies. The British approaches focus on input from pluralism, critical Marxism, and feminism; the American approaches concentrate on the social-psychological tradition. 
Rhona and Robert Rapoport studied work-life balance and inequality in many countries, wrote many books in this area, and helped influence policy and legislation to change practices. After World War II, leisure became a more pressing concern as automation began to replace jobs, leaving only leisure to fill the void. The goal was to identify new "productive and self-fulfilling free-time pursuits" to maintain the "feverish pursuit of happiness" of the 20th century. Sociologist Robert A. Stebbins coined the term "serious leisure" for cases in which a professional path and a sense of meaning are first found by following personal interests and then building a business out of them. Findings Many sociologists have assumed that a given type of leisure activity is most easily explained by socioeconomic variables such as income, occupation or education. This has yielded fewer results than expected; income is associated with total money spent on such activities, but otherwise only determines what type of activities are affordable. Occupation has a similar effect, because most occupations heavily influence a person's income (for example, membership in a prestigious occupation and "country-club" activities such as golf or sailing are significantly correlated—but membership in those occupations is also correlated with high income, and those activities with high cost). Education is correlated with having a wide range of leisure activities, and with higher dedication to them. As Kelly noted, "Predicting a person's leisure behavior on the basis of his socioeconomic position is all but impossible." On the other hand, type of leisure activity is substantially influenced by the individual's immediate situation—whether he has a family, whether there are recreational facilities nearby, and age. Early family influences, particularly involving the more social leisure activities, can be profound. The type of leisure activity also depends on the individual's current place in the life cycle. Within the framework of the family, leisure time has been researched to measure the effect of parents' weekend work on families. Such research found that parents' weekend work had a negative effect on the family, and particularly on the children. The research also noted that many of the parents who had to work on the weekend were less educated and had lower incomes, which could have implications for the family and society. Specific findings in sociological studies of leisure are illustrated by John Robinson's late-1970s study of American leisure. Robinson found that Americans, on average, have four hours of leisure time each weekday, and more on weekends—six hours on Saturdays, almost eight on Sundays. Amount of leisure time diminishes with age, work, marriage, and children. However, the amount of free time does not significantly depend on an individual's wealth. People desire less free time if they are uncertain of their economic future, or if their job is their central interest. During the second half of the twentieth century, watching television became a major leisure activity, causing a substantial decrease in the time dedicated to other activities; in the early 1970s the average American had 4 hours of leisure per day, and spent 1.5 of them watching television. Shared leisure activities increase marital satisfaction. There are different forms of leisure time and their benefits are not always clear, but generally, there is consensus that in moderation, they tend to have various positive effects. 
For example, going to the movies, alone or with friends, can improve health and well-being. Pay, work and leisure Individuals make trade-offs between pay, work and leisure. However, the timing and scale of those trade-offs vary with the occupations and incomes of individuals. They also vary over time and from society to society. In societies, substantial across-the-board rises in pay can increase the working hours of male blue-collar workers with young children but reduce those of middle-class women with husbands in well-paid full-time jobs. See also Notes a ^ There were few sociological studies of leisure before the second half of the 20th century. One of the earliest and most celebrated was Thorstein Veblen's The Theory of the Leisure Class (1899). References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-Isaacson2023-14] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media and technology, and is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024[update], Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that they had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus. 
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer became fully operational in December 2024. Local government in Memphis voiced concerns about the increased electricity usage, 150 megawatts of power at peak, and while the agreement with the city was being worked out, the company deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily supplement the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of air-polluting gases, and that xAI had been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators produce about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion is driven by xAI's push to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organised xAI into four primary development teams, one for the Grok app and others for features such as Grok Imagine; Grokipedia, X and API features fall under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities. 
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Turing_test] | [TOKENS: 9643]
Contents Turing test The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine's ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic). The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words". Turing describes the new form of the problem in terms of a three-person party game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against the major objections to the proposition that "machines can think". Since Turing introduced his test, it has been highly influential in the philosophy of artificial intelligence, resulting in substantial discussion and controversy, as well as criticism from philosophers like John Searle, who argue against the test's ability to detect consciousness. History The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method when he writes: [H]ow many different automata or moving machines could be made by the industry of man ... For we can easily understand a machine's being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do. Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton. Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing test as such, even if he prefigures its conceptual framework and criterion. 
Denis Diderot formulates in his 1746 book Pensées philosophiques a Turing-test criterion, though with the important implicit limiting assumption maintained, of the participants being natural living beings, rather than considering created artifacts: If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation. This does not mean he agrees with this, but that it was already a common argument of materialists at that time. According to dualism, the mind is non-physical (or, at the very least, has non-physical properties) and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially. In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined". (This suggestion is very similar to the Turing test, but it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test. A rudimentary idea of the Turing test appears in the 1726 novel Gulliver's Travels by Jonathan Swift. When Gulliver is brought before the king of Brobdingnag, the king thinks at first that Gulliver might be "a piece of clock-work (which is in that country arrived to a very great perfection) contrived by some ingenious artist". Even when he hears Gulliver speaking, the king still doubts whether Gulliver was taught "a set of words" to make him "sell at a better price". Gulliver relates that only after "he put several other questions to me, and still received rational answers" did the king become satisfied that Gulliver was not a machine. Tests where a human judges whether a computer or an alien is intelligent were an established convention in science fiction by the 1940s, and it is likely that Turing would have been aware of these. Stanley G. Weinbaum's "A Martian Odyssey" (1934) provides an example of how nuanced such tests could be. Earlier examples of machines or automatons attempting to pass as human include the Ancient Greek myth of Pygmalion who creates a sculpture of a woman that is animated by Aphrodite, Carlo Collodi's novel The Adventures of Pinocchio, about a puppet who wants to become a real boy, and E. T. A. Hoffmann's 1816 story "The Sandman," where the protagonist falls in love with an automaton. In all these examples, people are fooled by artificial beings that, up to a point, pass as human. Researchers in the United Kingdom had been exploring "machine intelligence" for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956. It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing. Turing, in particular, had been pursuing the notion of machine intelligence since at least 1941, and one of the earliest-known mentions of "computer intelligence" was made by him in 1947. 
In Turing's report, "Intelligent Machinery," he investigated "the question of whether or not it is possible for machinery to show intelligent behaviour" and, as part of that investigation, proposed what may be considered the forerunner to his later tests: It is not difficult to devise a paper machine which will play a not very bad game of chess. Now get three men A, B and C as subjects for the experiment. A and C are to be rather poor chess players, B is the operator who works the paper machine. ... Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing. "Computing Machinery and Intelligence" (1950) was the first published paper by Turing to focus exclusively on machine intelligence. Turing begins the 1950 paper with the claim, "I propose to consider the question 'Can machines think?'" As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "think". Turing chooses not to do so; instead, he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words". In essence, he proposes to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?" The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man". To demonstrate this approach, Turing proposes a test inspired by a party game, known as the "imitation game", in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the guests that they are the other. (Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.) Turing described his new version of the game as follows: We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?" Later in the paper, Turing suggests an "equivalent" alternative formulation involving a judge conversing only with a computer and a man. While neither of these formulations precisely matches the version of the Turing test that is more generally known today, he proposed a third in 1952. In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man. Turing's paper considered nine putative objections, which include some of the major arguments against artificial intelligence that have been raised in the years since the paper was published (see "Computing Machinery and Intelligence"). John Searle's 1980 paper Minds, Brains, and Programs proposed the "Chinese room" thought experiment and argued that the Turing test could not be used to determine if a machine could think. Searle noted that software (such as ELIZA) could pass the Turing test simply by manipulating symbols of which it had no understanding. Without understanding, it could not be described as "thinking" in the same sense people do. 
Therefore, Searle concluded, the Turing test could not prove that machines could think. Much like the Turing test itself, Searle's argument has been both widely criticised and endorsed. Arguments such as Searle's, and others concerning the philosophy of mind, sparked off a more intense debate about the nature of intelligence, the possibility of machines with a conscious mind and the value of the Turing test that continued through the 1980s and 1990s. The Loebner Prize, now reported as defunct, provided an annual platform for practical Turing tests with the first competition held in November 1991. It was underwritten by Hugh Loebner. The Cambridge Center for Behavioral Studies in Massachusetts, United States, organised the prizes up to and including the 2003 contest. As Loebner described it, one reason the competition was created was to advance the state of AI research, at least in part because no one had taken steps to implement the Turing test despite 40 years of discussing it. The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing test and the value of pursuing it, in both the popular press and academia. The first contest was won by a mindless program with no identifiable intelligence that managed to fool naïve interrogators into making the wrong identification. This highlighted several of the shortcomings of the Turing test (discussed below): The winner won, at least in part, because it was able to "imitate human typing errors"; the unsophisticated interrogators were easily fooled; and some researchers in AI have been led to feel that the test is merely a distraction from more fruitful research. The silver (text only) and gold (audio and visual) prizes have never been won. However, the competition has awarded the bronze medal every year for the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) won the bronze award on three occasions (2000, 2001 and 2004). The learning AI Jabberwacky won in 2005 and 2006. The Loebner Prize tested conversational intelligence; winners were typically chatterbot programs, or Artificial Conversational Entities (ACEs). Early Loebner Prize rules restricted conversations: each entry and hidden human conversed on a single topic, and the interrogators were thus restricted to one line of questioning per entity interaction. The restricted conversation rule was lifted for the 1995 Loebner Prize. Interaction duration between judge and entity has varied in Loebner Prizes. In Loebner 2003, at the University of Surrey, each interrogator was allowed five minutes to interact with an entity, machine or hidden human. Between 2004 and 2007, the interaction time allowed in Loebner Prizes was more than twenty minutes. The final competition was in 2019, due to a lack of funding for the prize following Loebner's death in 2016. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is commonly used online to tell humans and bots apart on the internet, and is based on the Turing test. Displaying distorted letters and numbers, it asks the user to identify the letters and numbers and type them into a field, which bots struggle to do. reCaptcha is a CAPTCHA system owned by Google. 
reCaptcha v1 and v2 operated by asking the user to match distorted pictures or identify distorted letters and numbers. reCaptcha v3 is designed not to interrupt users and runs automatically when pages are loaded or buttons are clicked. This "invisible" CAPTCHA verification happens in the background and no challenges appear, which filters out most basic bots. Attempts Several early symbolic AI programs were controversially claimed to pass the Turing test, either by limiting themselves to scripted situations or by presenting "excuses" for poor reasoning and conversational abilities, such as mental illness or a poor grasp of English. In 1966, Joseph Weizenbaum created a program called ELIZA, which mimicked a Rogerian psychotherapist. The program would search the user's sentence for keywords before repeating them back to the user, providing the impression of a program listening and paying attention. Weizenbaum thus succeeded by designing a context where a chatbot could mimic a person despite "knowing almost nothing of the real world". Weizenbaum's program was able to fool some people into believing that they were talking to a real person. Kenneth Colby created PARRY in 1972, a program modeled after the behaviour of paranoid schizophrenics. Psychiatrists asked to compare transcripts of conversations generated by the program to those of conversations by actual schizophrenics could only identify about 52 percent of cases correctly (a figure consistent with random guessing). In 2001, three programmers developed Eugene Goostman, a chatbot portraying itself as a 13-year-old boy from Odesa who spoke English as a second language. This background was intentionally chosen so judges would forgive mistakes by the program. In a competition, 33% of judges thought Goostman was human. In June 2022, Google's LaMDA model received widespread coverage after claims about it having achieved sentience. Initially, in an article in The Economist, Google Research Fellow Blaise Agüera y Arcas said the chatbot had demonstrated a degree of understanding of social relationships. Several days later, Google engineer Blake Lemoine claimed in an interview with the Washington Post that LaMDA had achieved sentience. Lemoine had been placed on leave by Google for internal assertions to this effect. Google had investigated the claims but dismissed them. OpenAI's chatbot, ChatGPT, released in November 2022, is based on the GPT-3.5 and GPT-4 large language models. Celeste Biever wrote in a Nature article that "ChatGPT broke the Turing test". Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative", making it the first computer program to successfully do so. In late March 2025, a study evaluated four systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomized, controlled, and pre-registered Turing tests with independent participant groups. Participants engaged in simultaneous 5-minute conversations with another human participant and one of these systems, then judged which conversational partner they believed to be human. When instructed to adopt a humanlike persona, GPT-4.5 was identified as the human 73% of the time—significantly more often than the actual human participants. LLaMa-3.1, under the same conditions, was judged to be human 56% of the time, not significantly more or less often than the humans it was compared to. 
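The study's claims that a system was judged human "significantly" more or less often than chance rest on standard hypothesis testing against the 50% rate expected from guessing. The following is a rough illustrative sketch only, not the study's actual analysis: the sample size of 100 judgments per system is hypothetical, and only the published percentages are reused. The baseline systems discussed next are subject to the same kind of comparison.

```python
# Illustrative sketch only: checks whether an observed "judged human" rate
# differs from the 50% chance level using an exact two-sided binomial test.
# The sample size n = 100 is hypothetical, not taken from the 2025 study.
from math import comb

def binomial_pmf(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided p-value: total probability of all outcomes that are
    no more likely than the observed count k under the null hypothesis."""
    observed = binomial_pmf(k, n, p)
    return sum(binomial_pmf(i, n, p) for i in range(n + 1)
               if binomial_pmf(i, n, p) <= observed + 1e-12)

for label, judged_human in [("GPT-4.5 (persona)", 73),
                            ("LLaMa-3.1", 56),
                            ("ELIZA", 23)]:
    print(f"{label}: {judged_human}/100 judged human, "
          f"p = {two_sided_p(judged_human, 100):.4f} versus 50% chance")
```

Under these hypothetical counts, 73/100 and 23/100 deviate significantly from chance while 56/100 does not, which matches the pattern reported above.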
Baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21%, respectively). Versions Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in "Computing Machinery and Intelligence" and one that he describes as the "Standard Interpretation". While there is some debate regarding whether the "Standard Interpretation" is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent, and their strengths and weaknesses are distinct. Turing's original article describes a simple party game involving three players. Player A is a man, player B is a woman and player C (who plays the role of the interrogator) is of either sex. In the imitation game, player C is unable to see either player A or player B, and can communicate with them only through written notes. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one. Turing then asks: "What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" These questions replace our original, "Can machines think?" The second version appeared later in Turing's 1950 paper. Similar to the original imitation game test, the role of player A is performed by a computer. However, the role of player B is performed by a man rather than a woman. Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man? In this version, both player A (the computer) and player B are trying to trick the interrogator into making an incorrect decision. The standard interpretation is not included in the original paper, but is both accepted and debated. Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human. While there is some dispute whether this interpretation was intended by Turing, Sterrett believes that it was and thus conflates the second version with this one, while others, such as Traiger, do not – this has nevertheless led to what can be viewed as the "standard interpretation". In this version, player A is a computer and player B a person of either sex. The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human. The fundamental issue with the standard interpretation is that the interrogator cannot differentiate which responder is human, and which is machine. There are issues about duration, but the standard interpretation generally considers this limitation as something that should be reasonable. Interpretations Controversy has arisen over which of the alternative formulations of the test Turing intended. Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, despite Turing's remark, they are not equivalent. 
The test that employs the party game and compares frequencies of success is referred to as the "Original Imitation Game Test", whereas the test consisting of a human judge conversing with a human and a machine is referred to as the "Standard Turing Test", noting that Sterrett equates this with the "standard interpretation" rather than the second version of the imitation game. Sterrett agrees that the standard Turing test (STT) has the problems that its critics cite but feels that, in contrast, the original imitation game test (OIG test) so defined is immune to many of them, due to a crucial difference: Unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence. A man can fail the OIG test, but it is argued that it is a virtue of a test of intelligence that failure indicates a lack of resourcefulness: The OIG test requires the resourcefulness associated with intelligence and not merely "simulation of human conversational behaviour". The general structure of the OIG test could even be used with non-verbal versions of imitation games. According to Huma Shah, Turing himself was concerned with whether a machine could think and was providing a simple method to examine this: through human-machine question-answer sessions. Shah argues the imitation game which Turing described could be practicalized in two different ways: a) one-to-one interrogator-machine test, and b) simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator. Still other writers have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test that he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that imitation game, rather than a capacity to succeed at one round of the game. Some writers argue that the imitation game is best understood by its social aspects. In his 1948 paper, Turing refers to intelligence as an "emotional concept," and notes that The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour. Following this remark and similar ones scattered throughout Turing's publications, Diane Proudfoot claims that Turing held a response-dependence approach to intelligence, according to which an intelligent (or thinking) entity is one that appears intelligent to an average interrogator. Shlomo Danziger promotes a socio-technological interpretation, according to which Turing saw the imitation game not as an intelligence test but as a technological aspiration - one whose realization would likely involve a change in society's attitude toward machines. According to this reading, Turing's celebrated 50-year prediction - that by the end of the 20th century his test will be passed by some machine - actually consists of two distinguishable predictions. 
The first is a technological prediction: I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning. The second prediction Turing makes is a sociological one: I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. Danziger claims further that for Turing, alteration of society's attitude towards machinery is a prerequisite for the existence of intelligent machines: only when the term "intelligent machine" is no longer seen as an oxymoron would the existence of intelligent machines become logically possible. Saygin has suggested that maybe the original game is a way of proposing a less biased experimental design, as it hides the participation of the computer. The imitation game also includes a "social hack" not found in the standard interpretation, as in the game both the computer and the male human are required to pretend to be someone they are not. A crucial piece of any laboratory test is that there should be a control. Turing never makes clear whether the interrogator in his tests is aware that one of the participants is a computer. He states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement. When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation. As Ayse Saygin, Peter Swirski, and others have highlighted, this makes a big difference to the implementation and outcome of the test. In an experimental study looking at Gricean maxim violations, using transcripts of Loebner's one-to-one (interrogator-hidden interlocutor) Prize for AI contests between 1994 and 1999, Ayse Saygin found significant differences between the responses of participants who knew and did not know about computers being involved. Strengths The power and appeal of the Turing test derive from its simplicity. The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of "intelligence" and "thinking" that are sufficiently precise and general to be applied to machines. Without such definitions, the central questions of the philosophy of artificial intelligence cannot be answered. The Turing test, even if imperfect, at least provides something that can actually be measured. As such, it is a pragmatic attempt to answer a difficult philosophical question. The format of the test allows the interrogator to give the machine a wide variety of intellectual tasks. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include". John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well". To pass a well-designed Turing test, the machine must use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed: this would force the machine to demonstrate skilled use of well-designed vision and robotics as well. 
Together, these represent almost all of the major problems that artificial intelligence research would like to solve. The Feigenbaum test is designed to take advantage of the broad range of topics available to a Turing test. It is a limited form of Turing's question-answer game which compares the machine against the abilities of experts in specific fields such as literature or chemistry. As a Cambridge honours graduate in mathematics, Turing might have been expected to propose a test of computer intelligence requiring expert knowledge in some highly technical field, and thus anticipating a more recent approach to the subject. Instead, as already noted, the test which he described in his seminal 1950 paper requires the computer to be able to compete successfully in a common party game, and this by performing as well as the typical man in answering a series of questions so as to pretend convincingly to be the woman contestant. Given the status of human sexual dimorphism as one of the most ancient of subjects, it is thus implicit in the above scenario that the questions to be answered will involve neither specialised factual knowledge nor information processing technique. The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility—both of which qualities are on display in this snippet of dialogue which Turing has imagined: When Turing does introduce some specialised knowledge into one of his imagined dialogues, the subject is not maths or electronics, but poetry: Turing thus once again demonstrates his interest in empathy and aesthetic sensitivity as components of an artificial intelligence; and in light of an increasing awareness of the threat from an AI run amok, it has been suggested that this focus perhaps represents a critical intuition on Turing's part, i.e., that emotional and aesthetic intelligence will play a key role in the creation of a "friendly AI". It is further noted, however, that whatever inspiration Turing might be able to lend in this direction depends upon the preservation of his original vision, which is to say, further, that the promulgation of a "standard interpretation" of the Turing test—i.e., one which focuses on a discursive intelligence only—must be regarded with some caution. Weaknesses Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward. Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. The interpretation makes the assumption that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing the machine with a human, and the value of comparing only behaviour. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field. In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill, or naïveté of the questioner. 
Numerous experts in the field, including cognitive scientist Gary Marcus, insist that the Turing test only shows how easy it is to fool humans and is not an indication of machine intelligence. Turing doesn't specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning". Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogators" are not even aware of the possibility that they are interacting with computers. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required. Early Loebner Prize competitions used "unsophisticated" interrogators who were easily fooled by the machines. Since 2004, the Loebner Prize organisers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines. One interesting feature of the Turing test is the frequency of the confederate effect, when the confederate (tested) humans are misidentified by the interrogators as machines. It has been suggested that what interrogators expect as human responses is not necessarily typical of humans. As a result, some individuals can be categorised as machines. This can therefore work in favour of a competing machine. The humans are instructed to "act themselves", but sometimes their answers are more like what the interrogator expects a machine to say. This raises the question of how to ensure that the humans are motivated to "act human". The Turing test does not directly test whether the computer behaves intelligently. It tests only whether the computer behaves like a human being. Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence in two ways: The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of the mind. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all. John Searle has argued that external behaviour cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking". His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.) Turing anticipated this line of criticism in his original paper, writing: I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper. Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research. 
Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test". There are several reasons. First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as object recognition or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Stuart Russell and Peter Norvig suggest an analogy with the history of flight: Planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'" Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence. Turing did not intend for his idea to be used to test the intelligence of programs—he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence. John McCarthy argues that we should not be surprised that a philosophical idea turns out to be useless for practical applications. He observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science". Another well known objection raised towards the Turing test concerns its exclusive focus on linguistic behaviour (i.e. it is only a "language-based" experiment, while all the other cognitive faculties are not tested). This drawback downsizes the role of other modality-specific "intelligent abilities" concerning human beings that the psychologist Howard Gardner, in his "multiple intelligence theory", proposes to consider (verbal-linguistic abilities are only one of those). A critical aspect of the Turing test is that a machine must give itself away as being a machine by its utterances. An interrogator must then make the "right identification" by correctly identifying the machine as being just that. If, however, a machine remains silent during a conversation, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess. Even taking into account a parallel/hidden human as part of the test may not help the situation as humans can often be misidentified as being a machine. By focusing on imitating humans, rather than augmenting or extending human capabilities, the Turing Test risks directing research and implementation toward technologies that substitute for humans and thereby drive down wages and income for workers. As they lose economic power, these workers may also lose political power, making it more difficult for them to change the allocation of wealth and income. This can trap them in a bad equilibrium. Erik Brynjolfsson has called this "The Turing Trap" and argued that there are currently excess incentives for creating machines that imitate rather than augment humans. Variations Numerous other versions of the Turing test, including those expounded above, have been raised through the years. 
A modification of the Turing test wherein the objectives of one or more of the roles have been reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion, who was particularly fascinated by the "storm" that resulted from the encounter of one mind by another. In his 2000 book, among several other original points with regard to the Turing test, literary scholar Peter Swirski discussed in detail the idea of what he termed the Swirski test—essentially the reverse Turing test. He pointed out that it overcomes most if not all standard objections levelled at the standard version. Carrying this idea forward, R. D. Hinshelwood described the mind as a "mind recognizing apparatus". The challenge would be for the computer to be able to determine if it were interacting with a human or another computer. This is an extension of the original question that Turing attempted to answer but would, perhaps, offer a high enough standard to define a machine that could "think" in a way that we typically define as characteristically human. CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumerical characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from being used to abuse the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human. Software that could reverse CAPTCHA with some accuracy by analysing patterns in the generating engine started being developed soon after the creation of CAPTCHA. In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo!, and PayPal up to 90% of the time. In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy. In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud. A further variation is motivated by the concern that modern natural language processing models have proved highly successful in generating text on the basis of a huge text corpus and could eventually pass the Turing test simply by manipulating words and sentences that were used in the initial training of the model. Since the interrogator has no precise understanding of the training data, the model might simply be returning sentences that exist in similar fashion in the enormous amount of training data. For this reason, Arthur Schwaninger proposes a variation of the Turing test that can distinguish between systems that are only capable of using language and systems that understand language. He proposes a test in which the machine is confronted with philosophical questions that do not depend on any prior knowledge and yet require self-reflection to be answered appropriately. Another variation is described as the subject-matter expert Turing test, where a machine's response cannot be distinguished from that of an expert in a given field. This is also known as a "Feigenbaum test" and was proposed by Edward Feigenbaum in a 2003 paper. 
Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science. Such questions reveal the precise details of the human embodiment of thought and can unmask a computer unless it experiences the world as humans do. The "Total Turing test" variation of the Turing test, proposed by cognitive scientist Stevan Harnad, adds two further requirements to the traditional Turing test. The interrogator can also test the perceptual abilities of the subject (requiring computer vision) and the subject's ability to manipulate objects (requiring robotics). Paul Schweizer argues that Harnad's work is too weak, and extends it further with the Truly Total Turing Test: It is essential to note that the TTTT is not a test of individual cognitive systems. Instead, it is meant to test the overall capacities of the type of cognitive architecture of which particular individuals are tokens. A letter published in Communications of the ACM describes the concept of generating a synthetic patient population and proposes a variation of the Turing test to assess the difference between synthetic and real patients. The letter states: "In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?" and further states: "Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade". The minimum intelligent signal test was proposed by Chris McKinstry as "the maximum abstraction of the Turing test", in which only binary responses (true/false or yes/no) are permitted, to focus only on the capacity for thought. It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence. The questions must each stand on their own, however, making it more like an IQ test than an interrogation. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured. The organisers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test. The data compression test has some advantages over most versions and variations of a Turing test, including:[citation needed] The main disadvantages of using data compression as a test are: A related approach to Hutter's prize, which appeared much earlier in the late 1990s, is the inclusion of compression problems in an extended Turing test, or tests which are completely derived from Kolmogorov complexity. Other related tests in this line are presented by Hernandez-Orallo and Dowe. Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence. Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers. 
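To make the compression-based tests described above concrete, the following is a minimal illustrative sketch, not the Hutter Prize procedure itself (which measures lossless compression of a fixed Wikipedia extract). The vocabulary, string lengths and random seed are arbitrary choices for the example; the point is only that text with exploitable statistical structure compresses far better than structureless noise, which is the intuition behind treating compression as a proxy for language-modelling ability.

```python
# Toy illustration of the "compression as a test" idea: structured text
# compresses much better than random noise. This is not the Hutter Prize
# benchmark; the strings below are synthetic examples.
import lzma
import random
import string

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means more structure was found."""
    raw = text.encode("utf-8")
    return len(lzma.compress(raw)) / len(raw)

random.seed(0)
vocabulary = ["the", "machine", "judge", "human", "question", "answer",
              "imitation", "game", "thinks", "conversation", "a", "and"]

word_structured = " ".join(random.choice(vocabulary) for _ in range(600))
noise = "".join(random.choice(string.printable) for _ in range(len(word_structured)))

print(f"word-structured text ratio: {compression_ratio(word_structured):.2f}")
print(f"random-character ratio:     {compression_ratio(noise):.2f}")
```

A compressor that modelled English as well as a human reader would, in this view, achieve markedly lower ratios on natural text than a generic algorithm such as the one used here.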
The Turing test inspired the Ebert test, proposed in 2011 by film critic Roger Ebert, which tests whether a computer-based synthesised voice has sufficient skill in terms of intonations, inflections, timing and so forth to make people laugh. Taking advantage of large language models, in 2023 the research company AI21 Labs created an online social experiment titled "Human or Not?". It was played more than 10 million times by more than 2 million people, making it the biggest Turing-style experiment to that date. The results showed that 32% of people could not distinguish between humans and machines. Alternative tests for machine intelligence The Lovelace test is named for Ada Lovelace, who suggested "only when computers originate things should they be believed to have minds". In 2023, David Eagleman proposed that "a meaningfully intelligent system should be able to do scientific discovery". In Eagleman's framework, Level 1 discovery means the AI is piecing together facts that already exist scattered in the literature (useful but not yet meaningfully intelligent). Level 2 discovery, in contrast, describes scientific progress that requires fresh conceptualization, simulation, and verification to arrive at genuinely new frameworks. Other tests of AI intelligence include the Winograd Schema Challenge, which tests a machine's ability to understand natural language. There is also the Allen AI Science Challenge, which tests a machine's ability to answer 8th grade science questions. Another test is the Artificial General Intelligence (AGI) Test, which asks whether a machine can perform any intellectual task that a human can. Conferences 1990 marked the fortieth anniversary of the first publication of Turing's "Computing Machinery and Intelligence" paper, and saw renewed interest in the test. Two significant events occurred in that year: the first was the Turing Colloquium, which was held at the University of Sussex in April, and brought together academics and researchers from a wide variety of disciplines to discuss the Turing test in terms of its past, present, and future; the second was the formation of the annual Loebner Prize competition. Blay Whitby lists four major turning points in the history of the Turing test – the publication of "Computing Machinery and Intelligence" in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990. In parallel to the 2008 Loebner Prize held at the University of Reading, the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) hosted a one-day symposium to discuss the Turing test, organised by John Barnden, Mark Bishop, Huma Shah and Kevin Warwick. The speakers included the Royal Institution's Director Baroness Susan Greenfield, Selmer Bringsjord, Turing's biographer Andrew Hodges, and consciousness scientist Owen Holland. No agreement emerged for a canonical Turing test, though Bringsjord expressed that a sizeable prize would result in the Turing test being passed sooner.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Million_years_ago] | [TOKENS: 688]
Contents Million years ago Million years ago, abbreviated as Mya, Myr (megayear), or Ma (megaannum), is a unit of time equal to 1,000,000 years (i.e. 1×10⁶ years), or approximately 31.6 teraseconds. Usage Myr is in common use in fields such as Earth science and cosmology. Myr is also used with Mya or Ma. Together they make a reference system: one refers to a quantity of time, the other to a particular point in a year-numbering system counted back from the present. Myr is deprecated in geology, but in astronomy Myr is standard. Where "myr" is seen in geology, it is usually "Myr" (a unit of mega-years). In astronomy, it is usually "Myr" (million years). Debate In geology, a debate remains open concerning the use of Myr (duration) plus Mya (million years ago) versus using only the term Ma. In either case, the term Ma is used in geology literature conforming to ISO 31-1 (now ISO 80000-3) and NIST 811 recommended practices. Traditional-style geology literature is written: The Cretaceous started 145 Ma and ended 66 Ma, lasting for 79 Myr. The "ago" is implied, so that any such year number "X Ma" between 66 and 145 is "Cretaceous", for good reason. But the counterargument is that having myr for a duration and Mya for an age mixes unit systems, and tempts capitalization errors: "million" need not be capitalized, but "mega" must be; "ma" would technically imply a milliyear (a thousandth of a year, or about 8.8 hours). On this side of the debate, one avoids myr and simply adds ago explicitly (or adds BP), as in: The Cretaceous started 145 Ma ago and ended 66 Ma ago, lasting for 79 Ma. In this case, "79 Ma" means only a quantity of 79 million years, without the meaning of "79 million years ago". Conflicts with the International System of Units and possible solutions The abbreviation mya does not comply with the International System of Units (SI) in three respects, each of which can be attributed to one of its letters: Since it is often clear from the context that the time must be in the past, mya is often simply replaced by Ma. Physically, this means that a point in time is replaced by a period of time, but this does not necessarily pose a problem if a definition (such as the ISO 8601 standard for date notation) is agreed upon. This makes it clear why the abbreviation mya is still so popular: it stands for the phrase "so many years ago," which makes it clear even without a definition that it must refer to a point in time so many years ago in the past. If it is not clear from the context that the time refers to the past, then when replacing mya with Ma, a "bp" (for "before present") is added to the latter. This also makes it clear that it refers to a point in time and not a period of time. However, Ma bp does not necessarily mean exactly the same as mya.
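The unit relationships discussed above reduce to simple arithmetic. The following is a small illustrative sketch (a Julian year of 365.25 days is assumed for the year-to-second conversion) showing the Cretaceous duration in Myr, the second-equivalent of one Myr, and why a lowercase "ma" would denote a period of only a few hours.

```python
# Illustrative arithmetic for the units above; assumes a Julian year of 365.25 days.
JULIAN_YEAR_SECONDS = 365.25 * 24 * 3600   # ~3.156e7 seconds

def myr_to_seconds(myr: float) -> float:
    """Convert megayears (Myr) to seconds."""
    return myr * 1_000_000 * JULIAN_YEAR_SECONDS

cretaceous_duration = 145 - 66             # two Ma dates subtract to a duration in Myr
print(f"Cretaceous duration: {cretaceous_duration} Myr")
print(f"1 Myr = {myr_to_seconds(1):.3e} s (about 31.6 teraseconds)")
print(f"1 'milliyear' (0.001 year) = {0.001 * JULIAN_YEAR_SECONDS / 3600:.2f} hours")
```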
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fringe_science] | [TOKENS: 846]
Contents Fringe science Fringe science refers to ideas whose attributes include being highly speculative or relying on premises already refuted. The chance of ideas rejected by editors and published outside the mainstream being correct is remote. When the general public does not distinguish between science and imitators, it risks exploitation, and in some cases, a "yearning to believe or a generalized suspicion of experts is a very potent incentive to accepting some pseudoscientific claims". The term "fringe science" covers everything from novel hypotheses, which can be tested utilizing the scientific method, to wild ad hoc hypotheses and mumbo jumbo. This has resulted in a tendency to dismiss all fringe science as the domain of pseudoscientists, hobbyists, and quacks. A concept that was once accepted by the mainstream scientific community may become fringe science because of a later evaluation of previous research. For example, focal infection theory, which held that focal infections of the tonsils or teeth are a primary cause of systemic disease, was once considered to be medical fact. It has since been dismissed because of a lack of evidence. Description The boundary between fringe science and pseudoscience is disputed. Friedlander writes that there is no widespread understanding of what separates science from nonscience or pseudoscience. Pseudoscience, however, is something that is not scientific but is incorrectly characterised as science. The term may be considered pejorative. For example, Lyell D. Henry Jr. wrote, "Fringe science [is] a term also suggesting kookiness." Continental drift was rejected for decades for lack of conclusive evidence, before plate tectonics was accepted. The confusion between science and pseudoscience, between honest scientific error and genuine scientific discovery, is not new, and it is a permanent feature of the scientific landscape .... Acceptance of new science can come slowly. Examples Some historical ideas that are considered to have been refuted by mainstream science are: Relatively recent fringe sciences include: Some theories that were once rejected as fringe science but were eventually accepted as mainstream science include: Responding to fringe science Michael W. Friedlander has suggested some guidelines for responding to fringe science, which, he argues, is a more difficult problem than scientific misconduct. His suggested methods include impeccable accuracy, checking cited sources, not overstating orthodox science, thorough understanding of the Wegener continental drift example, examples of orthodox science investigating radical proposals, and prepared examples of errors from fringe scientists. Friedlander suggests that fringe science is necessary so mainstream science will not atrophy. Scientists must evaluate the plausibility of each new fringe claim, and certain fringe discoveries "will later graduate into the ranks of accepted" — while others "will never receive confirmation". Margaret Wertheim profiled many "outsider scientists" in her book Physics on the Fringe, who receive little or no attention from professional scientists. She describes all of them as trying to make sense of the world using the scientific method but in the face of being unable to understand modern science's complex theories. 
She also finds it fair that credentialed scientists do not bother spending a lot of time learning about and explaining problems with the fringe theories of uncredentialed scientists, since the authors of those theories have not taken the time to understand the mainstream theories they aim to disprove. As Donald E. Simanek asserts, "Too often speculative and tentative hypotheses of cutting edge science are treated as if they were scientific truths, and so accepted by a public eager for answers." However, the public is often unaware that "As science progresses from ignorance to understanding it must pass through a transitional phase of confusion and uncertainty." The media also play a role in propagating the belief that certain fields of science are controversial. In their 2003 paper "Optimising Public Understanding of Science and Technology in Europe: A Comparative Perspective", Jan Nolin et al. write that "From a media perspective it is evident that controversial science sells, not only because of its dramatic value, but also since it is often connected to high-stake societal issues." See also References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTELane1905-15] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forwarding of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and the responses to them. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
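To make the multiple-heading problem concrete, here is a toy Python sketch; the motif labels and the joke summary are invented for the example and are not drawn from the Thompson Motif Index itself.

```python
from collections import defaultdict

# Toy motif index: the same joke text is filed under an actor, an item and an
# incident, so a researcher has to guess under which heading it was ordered.
index = defaultdict(list)

joke = "A farmer trades a talking parrot for a lame horse."
for motif in ("actor: farmer", "item: talking parrot", "incident: foolish bargain"):
    index[motif].append(joke)

for motif, texts in index.items():
    print(f"{motif}: {len(texts)} text(s)")  # the one joke appears three times
```

The duplication is exactly the ambiguity described above: a flat motif listing offers no single, obvious place either to file or to retrieve the narrative.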
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure include: As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
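The list of six Knowledge Resources is not reproduced in the text above, so the field names in the following Python sketch are taken from the wider GTVH literature as an assumption (Script Opposition, Logical Mechanism, Situation, Target, Narrative Strategy and Language); the class and its values are purely illustrative of how a concatenated KR label could be represented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GTVHLabel:
    """Illustrative multi-dimensional label for one joke (not a standard API)."""
    script_opposition: str                    # SO: the two opposed scripts
    narrative_strategy: str                   # NS: riddle, dialogue, one-liner, ...
    situation: str                            # SI: props, setting, activity
    language: str                             # LA: wording, position of the punchline
    target: Optional[str] = None              # TA: butt of the joke (may be empty)
    logical_mechanism: Optional[str] = None   # LM: how the scripts connect (may be empty)

# Per the text above, a lightbulb joke (SI) is always told as a riddle (NS):
lightbulb = GTVHLabel(
    script_opposition="expected competence vs. absurd incompetence",
    narrative_strategy="riddle",
    situation="changing a lightbulb",
    language="question-and-answer wording, punchline last",
    target="some named group",
)
print(lightbulb)
```

Comparing two such labels field by field is one way to read the text's claim that the similarity or dissimilarity of jokes can be evaluated through the similarity of their labels.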
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading
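As a toy illustration of the template-plus-finite-options approach that those early punning programs relied on, here is a minimal Python sketch; the template and the word pairs are invented for the example, and the point is precisely that no semantic scripts or world knowledge are involved.

```python
import random

# A fixed template and a small table of pre-defined punning options; the
# program recombines them but has no understanding of why the result is funny.
PUN_TABLE = [
    ("a fish with no eyes", "a fsh"),
    ("a boomerang that will not come back", "a stick"),
]
TEMPLATE = "Q: What do you call {setup}? A: {punchline}."

def make_pun() -> str:
    setup, punchline = random.choice(PUN_TABLE)
    return TEMPLATE.format(setup=setup, punchline=punchline)

if __name__ == "__main__":
    print(make_pun())
```

Anything beyond this kind of recombination runs into the script-knowledge problem described above, which is why more sophisticated joke generators have to wait for richer ontological semantic processing.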
========================================
[SOURCE: https://en.wikipedia.org/wiki/I-mode] | [TOKENS: 1392]
Contents i-mode i-mode (Japanese: iモード, ai-mōdo) is a Japanese mobile internet (distinct from wireless internet) service operated by NTT DoCoMo. Unlike Wireless Application Protocols, i-mode encompasses a wider variety of internet standards, including web access, e-mail, and the packet-switched network that delivers the data. i-mode users also have access to other various services such as: sports results, weather forecasts, games, financial services, and ticket booking. Content is provided by specialised services, typically from the mobile carrier, which allows them to have tighter control over billing. Like WAP, i-mode delivers only those services that are specifically converted for the service, or are converted through gateways. Description In contrast with the Wireless Application Protocol (WAP) standard, which used Wireless Markup Language (WML) on top of a protocol stack for wireless handheld devices, i-mode borrows from DoCoMo proprietary protocols ALP (HTTP) and TLP (TCP, UDP), as well as fixed Internet data formats such as C-HTML, a subset of the HTML language designed by DoCoMo. C-HTML was designed for small devices (e.g. cellular phones) with hardware restrictions such as lower memory, low-power CPUs with limited or no storage capabilities, small monochrome display screens, single-character fonts and limited input methods. As a simpler form of HTML, C-HTML does not support tables, image maps, multiple fonts and styling of fonts, background colors and images, frames, or style sheets, and is limited to a monochromatic display. i-mode phones have a special i-mode button for the user to access the start menu. There are more than 12,000 official sites and around 100,000 or more unofficial i-mode sites, which are not linked to DoCoMo's i-mode portal page and DoCoMo's billing services. NTT DoCoMo supervises the content and operations of all official i-mode sites, most of which are commercial. These official sites are accessed through DoCoMo's i-mode menu but in many cases official sites can also be accessed from mobile phones by typing the URL or through the use of QR code (a barcode). An i-mode user pays for both sent and received data. There are services to avoid unsolicited e-mails. The basic monthly charge is typically on the order of JPY¥200–300 for i-mode not including the data transfer charges, with additional charges on a monthly subscription basis for premium services. A variety of discount plans exist, for example family discount and flat packet plans for unlimited transfer of data at a fixed monthly charge (on the order of ¥4,000 per month). History i-mode was launched in Japan on 22 February 1999. The content planning and service design team was led by Mari Matsunaga, while Takeshi Natsuno was responsible for the business development. Top executive Keiichi Enoki oversaw the technical and overall development. A few months after DoCoMo launched i-mode in February 1999, DoCoMo's competitors launched very similar mobile data services: KDDI launched EZweb, and J-Phone launched J-Sky. Vodafone later acquired J-Phone including J-Sky, renaming the service Vodafone live!, although initially this was different from Vodafone live! in Europe and other markets. In addition, Vodafone KK was acquired by SoftBank, an operator of Yahoo! Japan in October, 2006 and changed the name to SoftBank Mobile. Bandai and Namco launched content for i-mode in 1999. Bandai launched the Dokodemo Aso Vegas service in May 1999, reaching over 1 million paid subscribers by March 2000. 
In December 1999, Namco launched Namco Station, a mobile site for i-mode. Since 2003, the i-mode center has been called CiRCUS; it consists of 400 NEC NX7000 HP-UX servers and occupies 4,600 m² of floor space in DoCoMo's Kawasaki office. The operation support system is called CARNiVAL, which is hosted in the Sanno Park Tower. As of June 2006, the mobile data services i-mode, EZweb, and J-Sky had over 80 million subscribers in Japan. i-mode usage in Japan peaked around 2008. On 29 October 2019, DoCoMo announced that i-mode will end on 31 March 2026. Markets Seeing the tremendous success of i-mode in Japan, many operators in Europe, Asia and Australia sought to license the service through partnership with DoCoMo. Takeshi Natsuno was behind the expansion of i-mode to 17 countries worldwide. Kamel Maamria, a partner with the Boston Consulting Group who was supporting Mr. Natsuno, is also thought to have had a major role in the expansion of the first Japanese service ever taken outside Japan. i-mode showed very fast take-up in the various countries where it was launched, which led more operators to seek to launch i-mode in their markets, with the footprint eventually reaching a total of 17 markets worldwide. While the i-mode service was an exceptional service which positioned DoCoMo as the global leader in value-added services, another key success factor for i-mode was the Japanese handset makers who developed state-of-the-art handsets to support i-mode. As i-mode was exported to the rest of the world, Nokia and the other major handset vendors who controlled the markets at the time at first refused to support i-mode by developing handsets for the service. The operators who decided to launch i-mode had to rely on Japanese vendors who had no experience in international markets. As i-mode showed success in these markets, some vendors started customizing some of their handsets to support i-mode; however, the support was only partial and came late. While the service was successful during the first years after launch, the lack of adequate handsets, the emergence of new handsets from new vendors that supported new Internet services, and a change of leadership of i-mode within DoCoMo led a number of operators to migrate or integrate i-mode into new mobile Internet services. These efforts were ultimately unsuccessful, and i-mode never became popular outside of Japan. i-mode sponsored the Renault F1 team from 2004 to 2006. i-mode was launched in the following countries: Devices Some typical features include the "clamshell" model with large displays (240 x 320 pixels) and, in many models, a display on either side. Additionally, several phones had extra features not typically found on other clamshell phones of the era, like digital cameras. The displays normally had 65,536 colors, and later models had as many as 262,144 colors. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Instant_messaging] | [TOKENS: 5713]
Contents Instant messaging Instant messaging (IM) technology is a type of synchronous computer-mediated communication involving the immediate (real-time) transmission of messages between two or more parties over the Internet or another computer network. Originally involving simple text message exchanges, modern instant messaging applications and services (also variously known as instant messenger, messaging app, chat app, chat client, or simply a messenger) tend to also feature the exchange of multimedia, emojis, file transfer, VoIP (voice calling), and video chat capabilities. Instant messaging systems facilitate connections between specified known users (often using a contact list also known as a "buddy list" or "friend list") or in chat rooms, and can be standalone apps or integrated into a wider social media platform or into a website, where it can, for instance, be used for conversational commerce. Originally the term "instant messaging" was distinguished from "text messaging" by being run on a computer network instead of a cellular/mobile network, by allowing longer messages, by offering real-time communication and presence ("status"), and by being free (charging only for access rather than per SMS message sent). Instant messaging was pioneered in the early Internet era; the IRC protocol was the earliest to achieve wide adoption. Later in the 1990s, ICQ was among the first closed and commercialized instant messengers, and several rival services appeared afterwards as instant messaging became a popular use of the Internet. Beginning with its introduction in 2005, BlackBerry Messenger became the first popular example of mobile-based IM, combining features of traditional IM and mobile SMS. Instant messaging remains very popular today; IM apps are the most widely used smartphone apps: in 2018, for instance, there were 980 million monthly active users of WeChat and 1.3 billion monthly users of WhatsApp, the largest IM network. Overview Instant messaging (IM), sometimes also called "messaging" or "texting", consists of computer-based human communication between two users (private messaging) or more (chat room or "group") in real time, allowing immediate receipt of acknowledgment or reply. This is in direct contrast to email, where conversations are not in real time; instant messaging instead gives its users a sense of quasi-synchronous communication (although many systems allow users to send offline messages that the other user receives when logging in). Earlier IM networks were limited to text-based communication, not dissimilar to mobile text messaging. As technology has moved forward, IM has expanded to include voice calling using a microphone, videotelephony using webcams, file transfer, location sharing, image and video transfer, voice notes, and other features. IM is conducted over the Internet or other types of networks (see also LAN messenger). Depending on the IM protocol, the technical architecture can be peer-to-peer (direct point-to-point transmission) or client–server (where all clients first connect to a central server). Primary IM services are controlled by their corresponding companies and usually follow the client-server model. At one point, the term "Instant Messenger" was a service mark of AOL Time Warner and could not be used in software not affiliated with AOL in the United States. For this reason, in April 2007, the developers of the instant messaging client formerly named Gaim (or gaim) announced that it would be renamed "Pidgin". 
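As a rough sketch of the client-server architecture just described, in which every client first connects to a central server that relays messages, here is a minimal Python example; the host, port and line-based wire format are invented for illustration and bear no relation to any real IM protocol.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # illustrative values only
clients = []                    # sockets of currently connected users
lock = threading.Lock()

def handle(conn: socket.socket) -> None:
    """Relay every line received from one client to all other clients."""
    try:
        for line in conn.makefile("r", encoding="utf-8"):
            with lock:
                peers = [c for c in clients if c is not conn]
            for peer in peers:
                peer.sendall(line.encode("utf-8"))
    finally:
        with lock:
            clients.remove(conn)
        conn.close()

def serve() -> None:
    with socket.create_server((HOST, PORT)) as server:
        while True:
            conn, _addr = server.accept()
            with lock:
                clients.append(conn)
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A peer-to-peer design would instead have each client open connections directly to the others, with no central relay; the centralised model sketched here is the one the text says most primary IM services follow.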
Modern IM services generally provide their own client, either a separately installed application or a browser-based client. They are normally centralised networks run by the servers of the platform's operators, unlike federated open protocols such as XMPP. These usually only work within the same IM network, although some allow limited functionality with other services (see #Interoperability). Third-party client software applications exist that will connect with most of the major IM services. There is also a class of instant messengers that uses a serverless model, in which the IM network consists only of clients. Several serverless messengers exist: RetroShare, Tox, Bitmessage, Ricochet, and Ring. See also: LAN messenger. Some examples of popular IM services today include Signal, Telegram, WhatsApp Messenger, WeChat, QQ Messenger, Viber, Line, and Snapchat.[citation needed] The popularity of particular apps differs greatly between countries. Certain apps have an emphasis on certain uses: for example, Skype focuses on video calling, Slack focuses on messaging and file sharing for work teams, and Snapchat focuses on image messages. Some social networking services offer messaging services as a component of their overall platform, such as Facebook's Facebook Messenger (Facebook also owns WhatsApp). Others have a direct IM function as an adjunct component of their social networking platforms, like Instagram, Reddit, Tumblr, TikTok, Clubhouse and Twitter; this also includes, for example, dating websites such as OkCupid or Plenty of Fish, and online gaming chat platforms. Private chat allows users to converse privately with another person or a group. Privacy can also be enhanced in several ways, such as end-to-end encryption by default. Public and group chat features allow users to communicate with multiple people simultaneously. Many major IM services and applications offer a call feature for user-to-user voice calls, conference calls, and voice messages. The call functionality is useful for professionals who utilize the application for work purposes and as a hands-free method. Videotelephony using a webcam is also offered by some. Some IM applications include in-app games for entertainment. Yahoo! Messenger, for example, introduced games that a user could play while friends watched in real time. MSN Messenger featured a number of playable games within the interface. Facebook's Messenger has had a built-in option to play games with people in a chat, including games like Tetris and Blackjack. Discord features multiple games built inside the "activities" tab in voice channels. A relatively recent addition to instant messaging, peer-to-peer payments are available for financial tasks on top of communication. The lack of a service fee also gives them an advantage over dedicated financial applications. IM services such as Facebook Messenger and the WeChat 'super-app', for example, offer a payment feature. History Though the term dates from the 1990s, instant messaging predates the Internet, first appearing on multi-user operating systems like Compatible Time-Sharing System (CTSS) and Multiplexed Information and Computing Service (Multics) in the mid-1960s. Initially, some of these systems were used as notification systems for services like printing, but they were quickly used to facilitate communication with other users logged into the same machine. CTSS facilitated communication via text message for up to 30 people. 
Parallel to instant messaging were early online chat facilities, the earliest of which was Talkomatic (1973) on the PLATO system, which allowed 5 people to chat simultaneously on a 512 x 512 plasma display (5 lines of text + 1 status line per person). During the bulletin board system (BBS) phenomenon that peaked during the 1980s, some systems incorporated chat features which were similar to instant messaging; Freelancin' Roundtable was one prime example. The first such general-availability commercial online chat service (as opposed to PLATO, which was educational) was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio. As networks developed, the protocols spread with the networks. Some of these used a peer-to-peer protocol (e.g. talk, ntalk and ytalk), while others required peers to connect to a server (see talker and IRC). The Zephyr Notification Service (still in use at some institutions) was invented at MIT's Project Athena in the 1980s to allow service providers to locate and send messages to users. Early instant messaging programs were primarily real-time text, where characters appeared as they were typed. This includes the Unix "talk" command line program, which was popular in the 1980s and early 1990s. Some BBS chat programs (i.e. Celerity BBS) also used a similar interface. Modern implementations of real-time text also exist in instant messengers, such as AOL's Real-Time IM as an optional feature. In the latter half of the 1980s and into the early 1990s, the Quantum Link online service for Commodore 64 computers offered user-to-user messages between concurrently connected customers, which they called "On-Line Messages" (or OLM for short), and later "FlashMail." Quantum Link later became America Online and made AOL Instant Messenger (AIM, discussed later). While the Quantum Link client software ran on a Commodore 64, using only the Commodore's PETSCII text-graphics, the screen was visually divided into sections and OLMs would appear as a yellow bar saying "Message From:" and the name of the sender along with the message across the top of whatever the user was already doing, and presented a list of options for responding. As such, it could be considered a type of graphical user interface (GUI), albeit much more primitive than the later Unix, Windows and Macintosh based GUI IM software. OLMs were what Q-Link called "Plus Services" meaning they charged an extra per-minute fee on top of the monthly Q-Link access costs. Development of the Internet Relay Chat (IRC) protocol began in 1989, and this would become the Internet's first widespread instant messaging standard. Modern, Internet-wide, GUI-based messaging clients as they are known today, began to take off in the mid-1990s with PowWow, ICQ, and AOL Instant Messenger (AIM). Similar functionality was offered by CU-SeeMe in 1992; though primarily an audio/video chat link, users could also send textual messages to each other. AOL later acquired Mirabilis, the authors of ICQ; establishing dominance in the instant messaging market. A few years later ICQ (then owned by AOL) was awarded two patents for instant messaging by the U.S. patent office. Meanwhile, other companies developed their own software; (Excite, Microsoft (MSN), Ubique, and Yahoo!), each with its own proprietary protocol and client; users therefore had to run multiple client applications if they wished to use more than one of these networks. 
However, the open protocol IRC remained popular into the new millennium, and its most popular graphical app was mIRC. While instant messaging was mainly in use for consumer recreational purposes, in 1998, IBM launched its Lotus Sametime instant messenger software, the first popular example of enterprise-grade instant messaging. In 2000, an open-source application and open standards-based protocol called Extensible Messaging and Presence Protocol (XMPP) was launched, initially branded as Jabber. XMPP servers could act as gateways to other IM protocols, reducing the need to run multiple clients. Video calling using a webcam also started taking off during this time. Microsoft's NetMeeting, which was focused on business "web conferencing", was one of the earliest; the company then launched Windows Messenger, which came preloaded on Windows XP and featured video capabilities. Yahoo! Messenger added video capabilities in 2001; by 2005, such features were also built into AIM, MSN Messenger, and Skype. There were a reported 100 million users of instant messaging in 2001. As of 2003, AIM was globally the most popular instant messenger, with 195 million users and exchanges of 1.6 billion messages daily. By 2006, AIM controlled 52 percent of the instant messaging market, but it declined rapidly shortly thereafter as the company struggled to compete with other services. Instant messaging integrated into other services started picking up pace in the late 2000s. Myspace, the then-largest social networking service, launched Myspace IM in 2006, shortly after Google's Gtalk, which was integrated into its Gmail webmail interface. Facebook Chat launched in 2008, providing IM to users of the social network. By 2010, traditional instant messaging was in sharp decline in favor of these new messaging features on wider social networks, which at the time were not normally called IM. For instance, AIM's userbase declined by more than half over the course of 2011. Standalone instant messenger services were later revived, evolving to be used primarily on mobile devices due to the increasing use of Internet-enabled cell phones and smartphones. Often called "chat apps" to distinguish them from cellular-based SMS and MMS "texting" services, these newer services were specially designed to run on mobile platforms, as opposed to older services like AIM and MSN; BlackBerry Messenger, released in 2005, was one of the influential pioneers of mobile IM, and led to other companies launching services with proprietary protocols, such as WhatsApp. Mobile instant messaging surpassed SMS in global message volume by 2013. While SMS relied on traditional paid telephone services, IM apps on mobile were available for free or a minor data charge. Older IM services were eventually shut down, including AIM and Yahoo! Messenger, and also Windows Live Messenger, which merged into Skype in 2013. In 2014, it was reported that instant messaging had more users than social networks. Concurrently, rising use of instant messaging at workplaces led to the creation of new services, often integrated with other enterprise applications such as workflow systems (enterprise application integration, EAI), for example Skype for Business, Slack and Microsoft Teams. Meanwhile, the launch of Discord in 2015 marked a notable new example of traditional IM originally designed for desktops. 
Interoperability Most IM protocols are proprietary and are not designed to be interoperable with others, meaning that many IM networks have been incompatible and users have been unable to reach users on other networks. As of 2024, fragmentation of IM services means that a typical user is likely to have to use more networks than ever, downloading and signing up for each app, to stay in touch with all their contacts. However, there have been attempts at solutions. Multi-protocol clients can use any of the IM protocols by using additional local libraries for each protocol. Examples of multi-protocol instant messenger software include Pidgin and Trillian, and more recently Beeper. These third-party clients have often been unable to keep up, due to proprietary protocol restrictions and being locked out of the networks. For instance, in 2015, WhatsApp started banning users who were using unofficial clients. Major IM providers usually cite the need for formal agreements and security concerns as reasons for making such changes. There have been several attempts in the past to create a unified standard for instant messaging, including: Critics say AOL's slowness in embracing interoperability has caused setbacks to other companies trying to grow their businesses. AOL has said it supports the development of an interoperable system for all IM networks but has cited privacy and security concerns as the reasons it is taking its time. Competitors have labeled that argument a "smoke screen." In the early 2000s, when instant messaging was quickly growing, most attempts at producing a unified standard among the then-major IM providers (AOL, Yahoo!, Microsoft) failed. There was a "bitter row" between AOL and its rivals regarding the opening up of their networks. In 2000, the U.S. regulator, the Federal Communications Commission (FCC), proposed (with the support of Microsoft chairman Bill Gates) that AOL provide interoperability of its AIM and ICQ instant messengers with Microsoft's MSN Messenger as a condition of the forthcoming AOL-Time Warner merger. However, in 2004, Microsoft, Yahoo! and AOL agreed to a deal in which Microsoft's enterprise IM server Live Communications Server 2005 would be able to talk to its rivals' counterparts and vice versa. On October 13, 2005, Microsoft and Yahoo! announced that their IM networks would soon be interoperable, using SIP/SIMPLE. This was finally rolled out to Windows Live Messenger and Yahoo! Messenger users in July 2006. Additionally, as part of the December 2005 AOL and Google strategic partnership deal, it was announced that AIM and ICQ users would be able to communicate with Google Talk users. However, this feature took until December 2007 to roll out. XMPP provided the best example of open protocol interoperability, having gateways that connected to Google Talk, Lotus Sametime and others. Later, RCS was developed by telecommunication companies as an instant messaging protocol to replace SMS under a unified standard. In 2022, the European Union passed the Digital Markets Act, which largely came into effect in early 2023. Among other things, the legislation mandates certain interoperability between the largest IM platforms in use in Europe. As a result, in March 2024, Meta Platforms opened up its WhatsApp and Messenger networks to be interoperable. 
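The multi-protocol client approach mentioned above (used by software such as Pidgin, Trillian, and Beeper) can be thought of as a thin abstraction layer over separate per-protocol libraries. The sketch below is purely hypothetical: the class and network names are invented for illustration and do not correspond to any real client's or service's API.

```python
# Hypothetical sketch of a multi-protocol IM client: a common interface plus one
# adapter per network, so the user interface never calls a network-specific
# library directly. All names here are invented for illustration.
from abc import ABC, abstractmethod

class ProtocolAdapter(ABC):
    """The small set of operations the client needs from every IM network."""

    @abstractmethod
    def connect(self, username: str) -> None: ...

    @abstractmethod
    def send(self, recipient: str, text: str) -> None: ...

class ToyNetwork(ProtocolAdapter):
    """Stand-in for a per-protocol library (an XMPP, IRC, or proprietary binding)."""

    def __init__(self, name: str):
        self.name = name

    def connect(self, username: str) -> None:
        print(f"[{self.name}] connected as {username}")

    def send(self, recipient: str, text: str) -> None:
        print(f"[{self.name}] -> {recipient}: {text}")

class MultiProtocolClient:
    """Fans operations out across all configured networks."""

    def __init__(self, adapters: dict[str, ProtocolAdapter]):
        self.adapters = adapters

    def connect_all(self, username: str) -> None:
        for adapter in self.adapters.values():
            adapter.connect(username)

    def send(self, network: str, recipient: str, text: str) -> None:
        self.adapters[network].send(recipient, text)

client = MultiProtocolClient({"net-a": ToyNetwork("net-a"), "net-b": ToyNetwork("net-b")})
client.connect_all("alice")
client.send("net-a", "bob", "one client, several incompatible networks")
```

The fragility described in the text lives in the adapter layer: when a provider changes or closes its protocol, the corresponding adapter, and with it that part of the unified client, stops working.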
There are two ways to combine the many disparate protocols: inside the IM client application (as multi-protocol clients such as Pidgin do) or inside the IM server application (as protocol gateways do). Some approaches allow organizations to deploy their own, private instant messaging network by enabling them to restrict access to the server (often with the IM network entirely behind their firewall) and administer user permissions. Other corporate messaging systems allow registered users to also connect from outside the corporate LAN, by using an encrypted, firewall-friendly, HTTPS-based protocol. Usually, a dedicated corporate IM server has several advantages, such as pre-populated contact lists, integrated authentication, and better security and privacy.[citation needed] Effects of IM on communication Instant messaging has changed how people communicate in the workplace. Enterprise messaging applications like Slack, Symphony, Teamnote and Yammer allow companies to enforce policies on how employees message at work and ensure secure storage of sensitive data. They allow employees to separate work information from their personal emails and texts. Messaging applications may make workplace communication efficient, but they can also have consequences for productivity. A study at Slack showed that, on average, people spend 10 hours a day on Slack, which is about 67% more time than they spend using email. Instant messaging is implemented in many video-conferencing tools. A study of chat use during work-related videoconferencing found that chat during meetings allows participants to communicate without interrupting the meeting and to plan action around common resources, and that it enables greater inclusion. The study also found that chat can cause distractions and information asymmetries between participants. Users sometimes make use of internet slang or text speak to abbreviate common words or expressions to quicken conversations or reduce keystrokes. The language has become widespread, with well-known expressions such as 'lol' translated over to face-to-face language. Emotions are often expressed in shorthand, such as the abbreviations LOL, BRB and TTYL; respectively laugh(ing) out loud, be right back, and talk to you later. Some, however, attempt to be more accurate with emotional expression over IM. Real-time reactions such as (chortle), (snort), (guffaw) or (eye-roll) were popular at one point. Certain conventions have also been introduced into mainstream conversations, including '#', which indicates the use of sarcasm in a statement, and '*', which indicates a spelling mistake and/or grammatical error in the prior message, followed by a correction. Business application Instant messaging products can usually be categorised into two types: Enterprise Instant Messaging (EIM) and Consumer Instant Messaging (CIM). Enterprise solutions use an internal IM server; however, this is not always feasible, particularly for smaller businesses with limited budgets. The second option, using a CIM, provides the advantage of being inexpensive to implement and requires little investment in new hardware or server software. IM is increasingly becoming a feature of enterprise software rather than a stand-alone application.[citation needed] Instant messaging has proven to be similar to personal computers, email, and the World Wide Web, in that its adoption for use as a business communications medium was driven primarily by individual employees using consumer software at work, rather than by formal mandate or provisioning by corporate information technology departments. 
Tens of millions of the consumer IM accounts in use are being used for business purposes by employees of companies and other organizations. The adoption of IM across corporate networks outside of the control of IT organizations creates risks and liabilities for companies who do not effectively manage and support IM use.[citation needed] IM was initially shunned by the corporate world partly due to security concerns, but by 2003 many had started embracing these new services. In response to the demand for business-grade IM and the need to ensure security and legal compliance, a new type of instant messaging, called "Enterprise Instant Messaging" ("EIM") was created when Lotus Software launched IBM Lotus Sametime in 1998. Microsoft followed suit shortly thereafter with Microsoft Exchange Instant Messaging, later created a new platform called Microsoft Office Live Communications Server, and released Office Communications Server 2007 in October 2007. Oracle Corporation also jumped into the market with its Oracle Beehive unified collaboration software. Both IBM Lotus and Microsoft have introduced federation between their EIM systems and some of the public IM networks so that employees may use one interface to both their internal EIM system and their contacts on AOL, MSN, and Yahoo. As of 2010, leading EIM platforms include IBM Lotus Sametime, Microsoft Office Communications Server, Jabber XCP and Cisco Unified Presence.[independent source needed] Industry-focused EIM platforms such as Reuters Messaging and Bloomberg Messaging also provide IM abilities to financial services companies.[independent source needed] Security and archiving Crackers (malicious or black hat hackers) have consistently used IM networks as vectors for delivering phishing attempts, drive-by URLs, and virus-laden file attachments, with over 1100 discrete attacks listed by the IM Security Center in 2004–2007. Hackers use two methods of delivering malicious code through IM: delivery of viruses, trojan horses, or spyware within an infected file, and the use of "socially engineered" text with a web address that entices the recipient to click on a URL connecting him or her to a website that then downloads malicious code.[citation needed] IM connections sometimes occur in plain text, making them vulnerable to eavesdropping. Also, IM client software often requires the user to expose open UDP ports to the world, raising the threat posed by potential security vulnerabilities. In the early 2000s, a new class of IT security providers emerged to provide remedies for the risks and liabilities faced by corporations who chose to use IM for business communications. The IM security providers created new products to be installed in corporate networks for the purpose of archiving, content-scanning, and security-scanning IM traffic moving in and out of the corporation. Similar to the e-mail filtering vendors, the IM security providers focus on the risks and liabilities described above. With the rapid adoption of IM in the workplace, demand for IM security products began to grow in the mid-2000s. By 2007, the preferred platform for the purchase of security software had become the "computer appliance", according to IDC, who estimated that by 2008, 80% of network security products would be delivered via an appliance. By 2014, however, instant messengers' safety level was still extremely poor. According to a scorecard by the Electronic Frontier Foundation, only 7 out of 39 instant messengers received a perfect score. 
In contrast, the most popular instant messengers at the time only attained a score of 2 out of 7. A number of studies have shown that IM services do a poor job of protecting user privacy. In 2023, cybersecurity researchers discovered that numerous malicious "mods" of the Telegram instant messenger exist, freely available for download from Google Play. Instant messages are often logged in a local message history, similar to the persistent nature of email. IM networks may store messages using either local device storage (e.g. WhatsApp, Viber, Line, WeChat, Signal, etc.) or cloud-based server storage provided by the service (e.g. Telegram, Skype, Facebook Messenger, Google Meet/Chat, Discord, Slack, etc.). Although cloud-based storage is advertised as offering encrypted messages, it poses an increased risk that the IM provider may have access to the decryption keys and view the user's saved messages. This requires users to trust IM servers and providers, because messages can generally be accessed by the company. Companies may be compelled to reveal their users' communications and may suspend user accounts for any reason. News reports from 2013 revealed that the NSA was not only collecting emails and IM messages but also tracking relationships between senders and receivers of those chats and emails in a process known as metadata collection. Metadata refers to data about the chat or email, as opposed to the contents of the messages, and it may be used to collect valuable information. In January 2014, Matthew Campbell and Michael Hurley filed a class-action lawsuit against Facebook for breaching the Electronic Communications Privacy Act. They alleged that the information in their supposedly private messages was being read and used to generate profit, specifically "for purposes including but not limited to data mining and user profiling". In corporate use of IM, organizational offerings have become very sophisticated in their security and logging measures. An employee or organization member must be granted login credentials and permission to use the messaging system. Creating a specific account for each user allows the organization to identify, track and record all use of its messenger system on its servers. Encryption is the primary method that instant messaging apps use to protect users' data privacy and security. For corporate use, encryption and conversation archiving are usually regarded as important features due to security concerns. There are also a number of open-source encrypted messengers. IM does hold potential advantages over SMS. SMS messages are not encrypted, making them insecure: the content of each SMS message is visible to mobile carriers and governments, can be intercepted by a third party, may leak metadata (such as phone numbers), and can be spoofed, with the sender field edited to impersonate another person. Current instant messaging networks that use end-to-end encryption include Signal, WhatsApp, Wire and iMessage. Signal and iMessage began using post-quantum cryptography in September 2023 and April 2024, respectively. Applications that have been criticized for lacking or poor encryption methods include Telegram and Confide, as both are prone to errors or do not have encryption enabled by default. In addition to the malicious code threat, using instant messaging at work creates a risk of non-compliance with laws and regulations governing electronic communications in businesses. 
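The end-to-end encryption mentioned above can be illustrated with public-key authenticated encryption from the PyNaCl library. This is a deliberately minimal sketch, not the protocol of Signal, WhatsApp or any other service named here; real messengers add key verification, forward secrecy (e.g. ratcheting), and group handling on top of primitives like this, and the party names and message below are invented.

```python
# Minimal sketch of end-to-end encryption between two IM users, using the
# PyNaCl library (pip install pynacl). Only the recipient's private key can
# decrypt; a relaying server that sees the ciphertext learns nothing about
# the plaintext. Illustration only, not any real messenger's protocol.
from nacl.public import PrivateKey, Box

# Each user generates a key pair; public keys are exchanged (in real systems,
# via the service's key directory plus out-of-band verification).
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()
alice_public, bob_public = alice_private.public_key, bob_private.public_key

# Alice encrypts for Bob using her private key and his public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at 6?")  # a random nonce is generated and prepended

# The ciphertext is what the IM server relays and (possibly) stores.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_public)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at 6?"
print(plaintext.decode())
```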
In the United States alone, there are over 10,000 laws and regulations related to electronic messaging and records retention. The better-known of these include the Sarbanes–Oxley Act, HIPAA, and SEC 17a-3. Clarification from the Financial Industry Regulatory Authority (FINRA) was issued to member firms in the financial services industry in December 2007, noting that "electronic communications", "email", and "electronic correspondence" may be used interchangeably and can include such forms of electronic messaging as instant messaging and text messaging. Changes to Federal Rules of Civil Procedure, effective December 1, 2006, created a new category for electronic records which may be requested during discovery in legal proceedings.[citation needed] Most nations also regulate electronic messaging and records retention similarly to the United States. The most common regulations related to IM at work involve producing archived business communications to satisfy government or judicial requests under law. Many instant messaging communications fall into the category of business communications that must be archived and retrievable.[citation needed] Current user base As of May 2025, the most used instant messaging apps and services worldwide include: Signal with 100 million, Line with 197 million, Viber with 260 million, QQ with 562 million, Snapchat with 900 million, Telegram with 1 billion, Facebook Messenger with 1.3 billion, WeChat with 1.39 billion, and WhatsApp with 3 billion users. There are 25 countries in the world where WhatsApp messenger is not the market leader in IM, such as the United States, Canada, Australia, New Zealand, Denmark, Norway, Sweden, Hungary, Lithuania, Poland, Slovakia, Philippines, and China. IM apps have varying levels of adoption in different countries. As of April 2022: See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Transistor_count] | [TOKENS: 2653]
Contents Transistor count The transistor count is the number of transistors in an electronic device (typically on a single substrate or silicon die). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observes that transistor count doubles approximately every two years. However, being directly proportional to the area of a die, transistor count does not represent how advanced the corresponding manufacturing technology is. A better indication of this is transistor density which is the ratio of a semiconductor's transistor count to its die area. Records As of 2023[update], the highest transistor count in flash memory is Micron's 2 terabyte (3D-stacked) 16-die, 232-layer V-NAND flash memory chip, with 5.3 trillion floating-gate MOSFETs (3 bits per transistor). The highest transistor count in a single chip processor as of 2020[update] is that of the deep learning processor Wafer Scale Engine 2 by Cerebras. It has 2.6 trillion MOSFETs in 84 exposed fields (dies) on a wafer, manufactured using TSMC's 7 nm FinFET process. As of 2024[update], the GPU with the highest transistor count is Nvidia's Blackwell-based B100 accelerator, built on TSMC's custom 4NP process node and totaling 208 billion MOSFETs. The highest transistor count in a consumer microprocessor as of March 2025[update] is 184 billion transistors, in Apple's ARM-based dual-die M3 Ultra SoC, which is fabricated using TSMC's 3 nm semiconductor manufacturing process.[citation needed] In terms of computer systems that consist of numerous integrated circuits, the supercomputer with the highest transistor count as of 2016[update] was the Chinese-designed Sunway TaihuLight, which has for all CPUs/nodes combined "about 400 trillion transistors in the processing part of the hardware" and "the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors." To compare, the smallest computer, as of 2018[update] dwarfed by a grain of rice, had on the order of 100,000 transistors. Early experimental solid-state computers had as few as 130 transistors but used large amounts of diode logic. The first carbon nanotube computer had 178 transistors and was a 1-bit one-instruction set computer, while a later one is 16-bit (its instruction set is 32-bit RISC-V though). Ionic transistor chips ("water-based" analog limited processor), have up to hundreds of such transistors. Estimates of the total numbers of transistors manufactured: Transistor count A microprocessor incorporates the functions of a computer's central processing unit on a single integrated circuit. It is a multi-purpose, programmable device that accepts digital data as input, processes it according to instructions stored in its memory, and provides results as output. The development of MOS integrated circuit technology in the 1960s led to the development of the first microprocessors. The 20-bit MP944, developed by Garrett AiResearch for the U.S. Navy's F-14 Tomcat fighter in 1970, is considered by its designer Ray Holt to be the first microprocessor. It was a multi-chip microprocessor, fabricated on six MOS chips. However, it was classified by the Navy until 1998. The 4-bit Intel 4004, released in 1971, was the first single-chip microprocessor. 
Modern microprocessors typically include on-chip cache memories. The number of transistors used for these cache memories typically far exceeds the number of transistors used to implement the logic of the microprocessor (that is, excluding the cache). For example, the last DEC Alpha chip uses 90% of its transistors for cache. A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the building of images in a frame buffer intended for output to a display. The designer refers to the technology company that designs the logic of the integrated circuit chip (such as Nvidia and AMD). The manufacturer ("Fab.") refers to the semiconductor company that fabricates the chip using its semiconductor manufacturing process at a foundry (such as TSMC and Samsung Semiconductor). The transistor count in a chip is dependent on a manufacturer's fabrication process, with smaller semiconductor nodes typically enabling higher transistor density and thus higher transistor counts. The random-access memory (RAM) that comes with GPUs (such as VRAM, SGRAM or HBM) greatly increases the total transistor count, with the memory typically accounting for the majority of transistors in a graphics card. For example, Nvidia's Tesla P100 has 15 billion FinFETs (16 nm) in the GPU in addition to 16 GB of HBM2 memory, totaling about 150 billion MOSFETs on the graphics card. The following table does not include the memory. For memory transistor counts, see the Memory section below. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing. Semiconductor memory is an electronic data storage device, often used as computer memory, implemented on integrated circuits. Nearly all semiconductor memories since the 1970s have used MOSFETs (MOS transistors), replacing earlier bipolar junction transistors. There are two major types of semiconductor memory: random-access memory (RAM) and non-volatile memory (NVM). In turn, there are two major RAM types: dynamic random-access memory (DRAM) and static random-access memory (SRAM), as well as two major NVM types: flash memory and read-only memory (ROM). Typical CMOS SRAM consists of six transistors per cell. For DRAM, 1T1C, meaning a one-transistor, one-capacitor structure, is common; whether the capacitor is charged or not is used to store a 1 or a 0. In flash memory, the data is stored in floating gates, and the resistance of the transistor is sensed to interpret the data stored. Depending on how finely the resistance levels can be distinguished, one transistor can store up to three bits, meaning eight distinct resistance levels per transistor. However, a finer scale comes at the cost of repeatability issues, and hence reliability. Typically, lower-grade 2-bit MLC flash is used for flash drives, so a 16 GB flash drive contains roughly 64 billion transistors. For SRAM chips, six-transistor cells (six transistors per bit) were the standard. DRAM chips during the early 1970s had three-transistor cells (three transistors per bit), before single-transistor cells (one transistor per bit) became standard from the era of 4 Kb DRAM in the mid-1970s onward. In single-level flash memory, each cell contains one floating-gate MOSFET (one transistor per bit), whereas multi-level flash contains 2, 3 or 4 bits per transistor. 
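The memory figures above follow from simple per-cell arithmetic: one floating-gate transistor per flash cell storing two bits, and one transistor (plus a capacitor) per DRAM bit. A small sketch, assuming 1 GB = 10^9 bytes for the estimate, reproduces the numbers quoted in the text:

```python
# Back-of-the-envelope transistor counts for memory, using the figures from the text.

def flash_transistors(capacity_bytes: int, bits_per_cell: int) -> float:
    """One floating-gate MOSFET per cell; each cell stores bits_per_cell bits."""
    total_bits = capacity_bytes * 8
    return total_bits / bits_per_cell

# 16 GB drive of 2-bit MLC flash (GB taken here as 10**9 bytes):
cells = flash_transistors(16 * 10**9, bits_per_cell=2)
print(f"16 GB MLC flash drive: ~{cells / 1e9:.0f} billion transistors")  # ~64 billion

# DRAM on a graphics card: 1T1C means one transistor (plus one capacitor) per bit,
# so 16 GB of HBM2 alone is roughly 128 billion cell transistors, consistent with
# the Tesla P100 example (15 billion in the GPU, ~150 billion for the whole card).
hbm2_bits = 16 * 10**9 * 8
print(f"16 GB HBM2: ~{hbm2_bits / 1e9:.0f} billion cell transistors")
```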
Flash memory chips are commonly stacked up in layers, up to 128-layer in production, and 136-layer managed, and available in end-user devices up to 69-layer from manufacturers. Before transistors were invented, relays were used in commercial tabulating machines and experimental early computers. The world's first working programmable, fully automatic digital computer, the 1941 Z3 22-bit word length computer, had 2,600 relays, and operated at a clock frequency of about 4–5 Hz. The 1940 Complex Number Computer had fewer than 500 relays, but it was not fully programmable. The earliest practical computers used vacuum tubes and solid-state diode logic. ENIAC had 18,000 vacuum tubes, 7,200 crystal diodes, and 1,500 relays, with many of the vacuum tubes containing two triode elements. The second generation of computers were transistor computers that featured boards filled with discrete transistors, solid-state diodes and magnetic memory cores. The experimental 1953 48-bit Transistor Computer, developed at the University of Manchester, is widely believed to be the first transistor computer to come into operation anywhere in the world (the prototype had 92 point-contact transistors and 550 diodes). A later version the 1955 machine had a total of 250 junction transistors and 1,300 point-contact diodes. The Computer also used a small number of tubes in its clock generator, so it was not the first fully transistorized. The ETL Mark III, developed at the Electrotechnical Laboratory in 1956, may have been the first transistor-based electronic computer using the stored program method. It had about "130 point-contact transistors and about 1,800 germanium diodes were used for logic elements, and these were housed on 300 plug-in packages which could be slipped in and out." The 1958 decimal architecture IBM 7070 was the first transistor computer to be fully programmable. It had about 30,000 alloy-junction germanium transistors and 22,000 germanium diodes, on approximately 14,000 Standard Modular System (SMS) cards. The 1959 MOBIDIC, short for "MOBIle DIgital Computer", at 12,000 pounds (6.0 short tons) mounted in the trailer of a semi-trailer truck, was a transistorized computer for battlefield data. The third generation of computers used integrated circuits (ICs). The 1962 15-bit Apollo Guidance Computer used "about 4,000 "Type-G" (3-input NOR gate) circuits" for about 12,000 transistors plus 32,000 resistors. The IBM System/360, introduced 1964, used discrete transistors in hybrid circuit packs. The 1965 12-bit PDP-8 CPU had 1409 discrete transistors and over 10,000 diodes, on many cards. Later versions, starting with the 1968 PDP-8/I, used integrated circuits. The PDP-8 was later reimplemented as a microprocessor as the Intersil 6100, see below. The next generation of computers were the microcomputers, starting with the 1971 Intel 4004, which used MOS transistors. These were used in home computers or personal computers (PCs). This list includes early transistorized computers (second generation) and IC-based computers (third generation) from the 1950s and 1960s. Transistor count for generic logic functions is based on static CMOS implementation. Historically, each processing element in earlier parallel systems—like all CPUs of that time—was a serial computer built out of multiple chips. As transistor counts per chip increases, each processing element could be built out of fewer chips, and then later each multi-core processor chip could contain more processing elements. Goodyear MPP: (1983?) 
8 pixel processors per chip, 3,000 to 8,000 transistors per chip. Brunel University Scape (single-chip array-processing element): (1983) 256 pixel processors per chip, 120,000 to 140,000 transistors per chip. Cell Broadband Engine: (2006) with 9 cores per chip, had 234 million transistors per chip. Transistor density The transistor density is the number of transistors that are fabricated per unit area, typically measured in terms of the number of transistors per square millimeter (mm²). The transistor density usually correlates with the gate length of a semiconductor node (also known as a semiconductor manufacturing process), typically measured in nanometers (nm). As of 2019, the semiconductor node with the highest transistor density is TSMC's 5 nanometer node, with 171.3 million transistors per square millimeter (note that this corresponds to a transistor-to-transistor spacing of 76.4 nm, far greater than the largely meaningless "5 nm" label). Gate count In certain applications, the term gate count is preferred over the term transistor count. It refers to the number of logic gates built with transistors and other electronic devices needed to implement a design. See also Notes References External links
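The spacing figure quoted in the transistor density discussion above follows from the density itself: assuming, for the estimate, that transistors sit on a uniform square grid, the pitch is the square root of the area available per transistor. A minimal check using the 171.3 million per mm² value from the text:

```python
# Reproduce the transistor-to-transistor spacing implied by a transistor density,
# assuming (for the estimate only) a uniform square grid of transistors.
import math

def spacing_nm(transistors_per_mm2: float) -> float:
    area_per_transistor_mm2 = 1.0 / transistors_per_mm2
    area_per_transistor_nm2 = area_per_transistor_mm2 * 1e12  # 1 mm^2 = 10^12 nm^2
    return math.sqrt(area_per_transistor_nm2)

print(f"{spacing_nm(171.3e6):.1f} nm")  # ~76.4 nm, as quoted in the text
```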
========================================
[SOURCE: https://en.wikipedia.org/wiki/Groupthink] | [TOKENS: 6542]
Contents Groupthink Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Cohesiveness, or the desire for cohesiveness, in a group may produce a tendency among its members to agree at all costs. This causes the group to minimize conflict and reach a consensus decision without critical evaluation. Groupthink is a construct of social psychology but has an extensive reach and influences literature in the fields of communication studies, political science, management, and organizational theory, as well as important aspects of deviant religious cult behaviour. Overview Groupthink is sometimes stated to occur (more broadly) within natural groups within the community, for example to explain the lifelong different mindsets of those with differing political views (such as "conservatism" and "liberalism" in the U.S. political context or the purported benefits of team work vs. work conducted in solitude). However, this conformity of viewpoints within a group does not mainly involve deliberate group decision-making, and might be better explained by the collective confirmation bias of the individual members of the group. [citation needed] The term was coined in 1952 by William H. Whyte Jr. Most of the initial research on groupthink was conducted by Irving Janis, a research psychologist from Yale University. Janis published an influential book in 1972, which was revised in 1982. Janis used the Bay of Pigs Invasion (the failed American invasion of Cuba in 1961) and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies. Later studies have evaluated and reformulated his groupthink model. Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and there is loss of individual creativity, uniqueness and independent thinking. [citation needed] The dysfunctional group dynamics of the "ingroup" produces an "illusion of invulnerability" (an inflated certainty that the right decision has been made). [citation needed] Thus the "ingroup" significantly overrates its own abilities in decision-making and significantly underrates the abilities of its opponents (the "outgroup"). [citation needed] Furthermore, groupthink can produce dehumanizing actions against the "outgroup". [citation needed] Members of a group can often feel under peer pressure to "go along with the crowd" for fear of "rocking the boat" or of how their speaking out will be perceived by the rest of the group. Group interactions tend to favor clear and harmonious agreements [citation needed] and it can be a cause for concern when little to no new innovations or arguments for better policies, outcomes and structures are called to question. (McLeod). Groupthink can often lead to the creation of "yes men", because group activities and group projects in general make it extremely easy to pass on not offering constructive opinions. [citation needed] Some methods that have been used to counteract group think in the past are selecting teams from more diverse backgrounds, and even mixing men and women for groups (Kamalnath). Groupthink can be considered to be a detriment to companies, organizations and in any work situations. Most positions that are senior level need individuals to be independent in their thinking. [citation needed] There is a positive correlation found between outstanding executives and decisiveness (Kelman). 
Groupthink also prohibits an organization from moving forward and innovating if no one ever speaks up and says something could be done differently. Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) play into the likelihood of whether or not groupthink will impact the decision-making process. History William H. Whyte Jr. derived the term from George Orwell's Nineteen Eighty-Four, and popularized it in 1952 in Fortune magazine: Groupthink being a coinage – and, admittedly, a loaded one – a working definition is in order. We are not talking about mere instinctive conformity – it is, after all, a perennial failing of mankind. What we are talking about is a rationalized conformity – an open, articulate philosophy which holds that group values are not only expedient but right and good as well. Groupthink was Whyte's diagnosis of the malaise affecting both the study and practice of management (and, by association, America) in the 1950s. Whyte was dismayed that employees had subjugated themselves to the tyranny of groups, which crushed individuality and were instinctively hostile to anything or anyone that challenged the collective view. American psychologist Irving Janis (Yale University) pioneered the initial research on the groupthink theory. He does not cite Whyte, but coined the term again by analogy with "doublethink" and similar terms that were part of the newspeak vocabulary in the novel Nineteen Eighty-Four by George Orwell. He initially defined groupthink as follows: I use the term groupthink as a quick and easy way to refer to the mode of thinking that persons engage in when concurrence-seeking becomes so dominant in a cohesive ingroup that it tends to override realistic appraisal of alternative courses of action. Groupthink is a term of the same order as the words in the newspeak vocabulary George Orwell used in his dismaying world of 1984. In that context, groupthink takes on an invidious connotation. Exactly such a connotation is intended, since the term refers to a deterioration in mental efficiency, reality testing and moral judgments as a result of group pressures.: 43 He went on to write: The main principle of groupthink, which I offer in the spirit of Parkinson's Law, is this: "The more amiability and esprit de corps there is among the members of a policy-making ingroup, the greater the danger that independent critical thinking will be replaced by groupthink, which is likely to result in irrational and dehumanizing actions directed against outgroups".: 44 Janis set the foundation for the study of groupthink starting with his research in the American Soldier Project where he studied the effect of extreme stress on group cohesiveness. After this study he remained interested in the ways in which people make decisions under external threats. This interest led Janis to study a number of "disasters" in American foreign policy, such as failure to anticipate the Japanese attack on Pearl Harbor (1941); the Bay of Pigs Invasion fiasco (1961); and the prosecution of the Vietnam War (1964–67) by President Lyndon Johnson. He concluded that in each of these cases, the decisions occurred largely because of groupthink, which prevented contradictory views from being expressed and subsequently evaluated. After the publication of Janis' book Victims of Groupthink in 1972, and a revised edition with the title Groupthink: Psychological Studies of Policy Decisions and Fiascoes in 1982, the concept of groupthink was used[by whom?] 
to explain many other faulty decisions in history. These events included Nazi Germany's decision to invade the Soviet Union in 1941, the Watergate scandal and others. Despite the popularity of the concept of groupthink, fewer than two dozen studies addressed the phenomenon itself following the publication of Victims of Groupthink, between the years 1972 and 1998.: 107 This was surprising considering how many fields of interests it spans, which include political science, communications, organizational studies, social psychology, management, strategy, counseling, and marketing. One can most likely explain this lack of follow-up in that group research is difficult to conduct, groupthink has many independent and dependent variables, and it is unclear "how to translate [groupthink's] theoretical concepts into observable and quantitative constructs".: 107–108 Nevertheless, outside research psychology and sociology, wider culture has come to detect groupthink in observable situations, for example: Symptoms To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink: Type I: Overestimations of the group — its power and morality Type II: Closed-mindedness Type III: Pressures toward uniformity When a group exhibits most of the symptoms of groupthink, the consequences of a failing decision process can be expected: incomplete analysis of the other options, incomplete analysis of the objectives, failure to examine the risks associated with the favored choice, failure to reevaluate the options initially rejected, poor information research, selection bias in available information processing, failure to prepare for a back-up plan. Causes Janis identified three antecedent conditions to groupthink:: 9 Although it is possible for a situation to contain all three of these factors, all three are not always present even when groupthink is occurring. Janis considered a high degree of cohesiveness to be the most important antecedent to producing groupthink, and always present when groupthink was occurring; however, he believed high cohesiveness would not always produce groupthink. A very cohesive group abides with all group norms; but whether or not groupthink arises is dependent on what the group norms are. If the group encourages individual dissent and alternative strategies to problem solving, it is likely that groupthink will be avoided even in a highly cohesive group. This means that high cohesion will lead to groupthink only if one or both of the other antecedents is present, situational context being slightly more likely than structural faults to produce groupthink. A 2018 study found that absence of a tenured project leader can also create conditions for groupthink to prevail. Presence of an "experienced" project manager can reduce the likelihood of groupthink by taking steps like critically analysing ideas, promoting open communication, encouraging diverse perspectives, and raising team awareness of groupthink symptoms. It was found that among people who have bicultural identity, those with highly integrated bicultural identity as opposed to less integrated were more prone to groupthink. In another 2022 study in Tanzania, Hofstede's cultural dimensions come into play. It was observed that in high power distance societies, individuals are hesitant to voice dissent, deferring to leaders' preferences in making decisions. Furthermore, as Tanzania is a collectivist society, community interests supersede those of individuals. 
The combination of high power distance and collectivism creates optimal conditions for groupthink to occur. Prevention As observed by Aldag and Fuller (1993), the groupthink phenomenon seems to rest on a set of unstated and generally restrictive assumptions: It has been thought that groups with the strong ability to work together will be able to solve dilemmas in a quicker and more efficient fashion than an individual. Groups have a greater amount of resources which lead them to be able to store and retrieve information more readily and come up with more alternative solutions to a problem. There was a recognized downside to group problem solving in that it takes groups more time to come to a decision and requires that people make compromises with each other. However, it was not until the research of Janis appeared that anyone really considered that a highly cohesive group could impair the group's ability to generate quality decisions. Tight-knit groups may appear to make decisions better because they can come to a consensus quickly and at a low energy cost; however, over time this process of decision-making may decrease the members' ability to think critically. It is, therefore, considered by many to be important to combat the effects of groupthink. According to Janis, decision-making groups are not necessarily destined to groupthink. He devised ways of preventing groupthink:: 209–215 The devil's advocate in a group may provide questions and insight which contradict the majority group in order to avoid groupthink decisions. A study by Ryan Hartwig confirms that the devil's advocacy technique is very useful for group problem-solving. It allows for conflict to be used in a way that is most-effective for finding the best solution so that members will not have to go back and find a different solution if the first one fails. Hartwig also suggests that the devil's advocacy technique be incorporated with other group decision-making models such as the functional theory to find and evaluate alternative solutions. The main idea of the devil's advocacy technique is that somewhat structured conflict can be facilitated to not only reduce groupthink, but to also solve problems. Diversity of all kinds is also instrumental in preventing groupthink. Individuals with varying backgrounds, thought, professional and life experiences etc. can offer unique perspectives and challenge assumptions. In a 2004 study, a diverse team of problem-solver outperformed a team consisting of best problem solvers as they start to think alike. Joris Graff offered a new debate format designed to prevent groupthink from occurring in a classroom setting specifically regarding debate lessons. He agreed that greater diversity in arguments both within a team and against an opposing side would prevent groupthink and suggested several ways to introduce that diversity into debates. Graff also suggested that the goal of debates should be on consensus or compromise over designating a winner. He argues that encouraging opposing teams to work together to come up with a viable solution prevents common arguments from becoming the only arguments used due to perceived success at getting a specific desired outcome. Psychological safety, emphasized by Edmondson and Lei and Hirak et al., is crucial for effective group performance. It involves creating an environment that encourages learning and removes barriers perceived as threats by team members. Edmondson et al. 
demonstrated variations in psychological safety based on work type, hierarchy, and leadership effectiveness, highlighting its importance in employee development and fostering a culture of learning within organizations. A similar situation to groupthink is the Abilene paradox, another phenomenon that is detrimental when working in groups. When organizations fall into an Abilene paradox, they take actions in contradiction to what their perceived goals may be and therefore defeat the very purposes they are trying to achieve. Failure to communicate desires or beliefs can cause an Abilene paradox. The Watergate scandal is an example of this.[citation needed] Before the scandal had occurred, a meeting took place where they discussed the issue. One of Nixon's campaign aides was unsure if he should speak up and give his input. If he had voiced his disagreement with the group's decision, it is possible that the scandal could have been avoided.[citation needed] After the Bay of Pigs invasion fiasco, President John F. Kennedy sought to avoid groupthink during the Cuban Missile Crisis using "vigilant appraisal".: 148–153 During meetings, he invited outside experts to share their viewpoints, and allowed group members to question them carefully. He also encouraged group members to discuss possible solutions with trusted members within their separate departments, and he even divided the group up into various sub-groups, to partially break the group cohesion. Kennedy was deliberately absent from the meetings, so as to avoid pressing his own opinion. Cass Sunstein reports that introverts can sometimes be silent in meetings with extroverts; he recommends explicitly asking for each person's opinion, either during the meeting or afterwards in one-on-one sessions. Sunstein points to studies showing groups with a high level of internal socialization and happy talk are more prone to bad investment decisions due to groupthink, compared with groups of investors who are relative strangers and more willing to be argumentative. To avoid group polarization, where discussion with like-minded people drives an outcome further to an extreme than any of the individuals favored before the discussion, he recommends creating heterogeneous groups which contain people with different points of view. Sunstein also points out that people arguing a side they do not sincerely believe (in the role of devil's advocate) tend to be much less effective than a sincere argument. This can be accomplished by dissenting individuals, or a group like a Red Team that is expected to pursue an alternative strategy or goal "for real". Empirical findings and meta-analysis Testing groupthink in a laboratory is difficult because synthetic settings remove groups from real social situations, which ultimately changes the variables conducive or inhibitive to groupthink. Because of its subjective nature, researchers have struggled to measure groupthink as a complete phenomenon, instead frequently opting to measure its particular factors. These factors range from causal to effectual[clarification needed] and focus on group and situational aspects. 
Park (1990) found that "only 16 empirical studies have been published on groupthink", and concluded that they "resulted in only partial support of his [Janis's] hypotheses".: 230 Park concludes, "despite Janis' claim that group cohesiveness is the major necessary antecedent factor, no research has shown a significant main effect of cohesiveness on groupthink.": 230 Park also concludes that research does not support Janis' claim that cohesion and leadership style interact to produce groupthink symptoms. Park presents a summary of the results of the studies analyzed. According to Park, a study by Huseman and Drive (1979) indicates groupthink occurs in both small and large decision-making groups within businesses. This results partly from group isolation within the business. Manz and Sims (1982) conducted a study showing that autonomous work groups are susceptible to groupthink symptoms in the same manner as decisions making groups within businesses. Fodor and Smith (1982) produced a study revealing that group leaders with high power motivation create atmospheres more susceptible to groupthink. Leaders with high power motivation possess characteristics similar to leaders with a "closed" leadership style—an unwillingness to respect dissenting opinion. The same study indicates that level of group cohesiveness is insignificant in predicting groupthink occurrence. Park summarizes a study performed by Callaway, Marriott, and Esser (1985) in which groups with highly dominant members "made higher quality decisions, exhibited lowered state of anxiety, took more time to reach a decision, and made more statements of disagreement/agreement".: 232 Overall, groups with highly dominant members expressed characteristics inhibitory to groupthink. If highly dominant members are considered equivalent to leaders with high power motivation, the results of Callaway, Marriott, and Esser contradict the results of Fodor and Smith. A study by Leana (1985) indicates the interaction between level of group cohesion and leadership style is completely insignificant in predicting groupthink. This finding refutes Janis' claim that the factors of cohesion and leadership style interact to produce groupthink. Park summarizes a study by McCauley (1989) in which structural conditions of the group were found to predict groupthink while situational conditions did not. The structural conditions included group insulation, group homogeneity, and promotional leadership. The situational conditions included group cohesion. These findings refute Janis' claim about group cohesiveness predicting groupthink. Overall, studies on groupthink have largely focused on the factors (antecedents) that predict groupthink. Groupthink occurrence is often measured by number of ideas/solutions generated within a group, but there is no uniform, concrete standard by which researchers can objectively conclude groupthink occurs. The studies of groupthink and groupthink antecedents reveal a mixed body of results. Some studies indicate group cohesion and leadership style to be powerfully predictive of groupthink, while other studies indicate the insignificance of these factors. Group homogeneity and group insulation are generally supported as factors predictive of groupthink. Case studies Groupthink can have a strong hold on political decisions and military operations, which may result in enormous wastage of human and material resources. 
Highly qualified and experienced politicians and military commanders sometimes make very poor decisions when in a suboptimal group setting. Scholars such as Janis and Raven attribute political and military fiascoes, such as the Bay of Pigs Invasion, the Vietnam War, and the Watergate scandal, to the effect of groupthink. More recently, Dina Badie argued that groupthink was largely responsible for the shift in the U.S. administration's view on Saddam Hussein that eventually led to the 2003 invasion of Iraq by the United States. After the September 11 attacks, "stress, promotional leadership, and intergroup conflict" were all factors that gave rise to the occurrence of groupthink.: 283 Political case studies of groupthink serve to illustrate the impact that the occurrence of groupthink can have in today's political scene. The United States Bay of Pigs Invasion of April 1961 was the primary case study that Janis used to formulate his theory of groupthink. The invasion plan was initiated by the Eisenhower administration, but when the Kennedy administration took over, it "uncritically accepted" the plan of the Central Intelligence Agency (CIA).: 44 When some people, such as Arthur M. Schlesinger Jr. and Senator J. William Fulbright, attempted to present their objections to the plan, the Kennedy team as a whole ignored these objections and kept believing in the morality of their plan.: 46 Eventually Schlesinger minimized his own doubts, performing self-censorship.: 74 The Kennedy team stereotyped Fidel Castro and the Cubans by failing to question the CIA about its many false assumptions, including the ineffectiveness of Castro's air force, the weakness of Castro's army, and the inability of Castro to quell internal uprisings.: 46 Janis argued the fiasco that ensued could have been prevented if the Kennedy administration had followed the methods to preventing groupthink adopted during the Cuban Missile Crisis, which took place just one year later in October 1962. In the latter crisis, essentially the same political leaders were involved in decision-making, but this time they learned from their previous mistake of seriously under-rating their opponents.: 76 The attack on Pearl Harbor on December 7, 1941, is a prime example of groupthink. A number of factors such as shared illusions and rationalizations contributed to the lack of precaution taken by U.S. Navy officers based in Hawaii. The United States had intercepted Japanese messages and they discovered that Japan was arming itself for an offensive attack somewhere in the Pacific Ocean. Washington took action by warning officers stationed at Pearl Harbor, but their warning was not taken seriously. They assumed that the Empire of Japan was taking measures in the event that their embassies and consulates in enemy territories were usurped. The U.S. Navy and Army in Pearl Harbor also shared rationalizations about why an attack was unlikely. Some of them included:: 83, 85 On January 28, 1986, NASA launched the space shuttle Challenger. This was significant because a civilian, non-astronaut, high school teacher was to be the first American civilian in space. The space shuttle was perceived to be so safe as to make this possible. NASA's engineering and launch teams rely on teamwork. To launch the shuttle, individual team members must affirm each system is functioning nominally. 
The Morton Thiokol engineers who designed and built the Challenger's rocket boosters had warned that the cooler temperature on the day of the launch could result in failure and the death of the crew, but the warnings went unheeded. The Space Shuttle Challenger disaster grounded space shuttle flights for nearly three years. The Challenger case was subject to a more quantitatively oriented test of Janis's groupthink model performed by Esser and Lindoerfer, who found clear signs of positive antecedents to groupthink in the critical decisions concerning the launch of the shuttle. The day of the launch was rushed for publicity reasons. NASA wanted to captivate and hold the attention of America. Having civilian teacher Christa McAuliffe on board to broadcast a live lesson, and the possible mention by President Ronald Reagan in the State of the Union address, were opportunities NASA deemed critical to increasing interest in its potential civilian space flight program. The schedule NASA set out to meet was, however, self-imposed. It seemed incredible to many that an organization with a perceived history of successful management would have locked itself into a schedule it had no chance of meeting. In the corporate world, ineffective and suboptimal group decision-making can negatively affect the health of a company and cause a considerable amount of monetary loss. Aaron Hermann and Hussain Rammal illustrate the detrimental role of groupthink in the collapse of Swissair, a Swiss airline company that was thought to be so financially stable that it earned the title the "Flying Bank". The authors argue that, among other factors, Swissair carried two symptoms of groupthink: the belief that the group is invulnerable and the belief in the morality of the group.: 1056 In addition, before the fiasco, the size of the company board was reduced, subsequently eliminating industrial expertise. This may have further increased the likelihood of groupthink.: 1055 With the board members lacking expertise in the field and having somewhat similar backgrounds, norms, and values, the pressure to conform may have become more prominent.: 1057 This phenomenon is called group homogeneity, which is an antecedent to groupthink. Together, these conditions may have contributed to the poor decision-making process that eventually led to Swissair's collapse. Another example of groupthink from the corporate world is illustrated in the United Kingdom-based companies Marks & Spencer and British Airways. The negative impact of groupthink took place during the 1990s as both companies released globalization expansion strategies. Researcher Jack Eaton's content analysis of media press releases revealed that all eight symptoms of groupthink were present during this period. The most predominant symptom of groupthink was the illusion of invulnerability, as both companies underestimated potential failure due to years of profitability and success during challenging markets. Until the consequences of groupthink erupted, they were considered blue chips and darlings of the London Stock Exchange. During 1998–1999 the price of Marks & Spencer shares fell from 590 pence to less than 300, and that of British Airways from 740 to 300. Both companies had previously been prominently featured in the UK press and media for more positive reasons, reflecting national pride in their undeniable sector-wide performance. Recent literature on groupthink attempts to study the application of this concept beyond the framework of business and politics.
One particularly relevant and popular arena in which groupthink is rarely studied is sports. The lack of literature in this area prompted Charles Koerber and Christopher Neck to begin a case-study investigation that examined the effect of groupthink on the decision of the Major League Umpires Association (MLUA) to stage a mass resignation in 1999. The decision was a failed attempt to gain a stronger negotiating stance against Major League Baseball.: 21 Koerber and Neck suggest that three groupthink symptoms can be found in the decision-making process of the MLUA. First, the umpires overestimated the power that they had over the baseball league and the strength of their group's resolve. The union also exhibited some degree of closed-mindedness with the notion that MLB is the enemy. Lastly, there was the presence of self-censorship; some umpires who disagreed with the decision to resign failed to voice their dissent.: 25 These factors, along with other decision-making defects, led to a decision that was suboptimal and ineffective. Recent developments Researcher Robert Baron (2005) contends that the connection between certain antecedents which Janis believed necessary has not been demonstrated by the current collective body of research on groupthink. He believes that Janis' antecedents for groupthink are incorrect, and argues that not only are they "not necessary to provoke the symptoms of groupthink, but that they often will not even amplify such symptoms". As an alternative to Janis' model, Baron proposed a ubiquity model of groupthink. This model provides a revised set of antecedents for groupthink, including social identification, salient norms, and low self-efficacy. Aldag and Fuller (1993) argue that the groupthink concept was based on a "small and relatively restricted sample" that became too broadly generalized. Furthermore, the concept is too rigidly staged and deterministic. Empirical support for it has also not been consistent. The authors compare the groupthink model to findings presented by Maslow and Piaget; they argue that, in each case, the model incites great interest and further research that subsequently invalidate the original concept. Aldag and Fuller thus suggest a new model called the general group problem-solving (GGPS) model, which integrates new findings from the groupthink literature and alters aspects of groupthink itself.: 534 The primary difference between the GGPS model and groupthink is that the former is more value-neutral and more political.: 544 Later scholars have reassessed the merit of groupthink by reexamining case studies that Janis originally used to buttress his model. Roderick Kramer (1998) believed that, because scholars today have a more sophisticated set of ideas about the general decision-making process and because new and relevant information about the fiascos has surfaced over the years, a reexamination of the case studies is appropriate and necessary. He argues that new evidence does not support Janis' view that groupthink was largely responsible for President Kennedy's decisions in the Bay of Pigs Invasion or for President Johnson's escalation of U.S. military involvement in the Vietnam War.
Both presidents sought the advice of experts outside of their political groups more than Janis suggested.: 241 Kramer also argues that the presidents were the final decision-makers of the fiascos; while determining which course of action to take, they relied more heavily on their own construals of the situations than on any group-consenting decision presented to them.: 241 Kramer concludes that Janis' explanation of the two military issues is flawed and that groupthink has much less influence on group decision-making than is popularly believed. Although groupthink is generally regarded as something to avoid, it can have some positive effects. Choi and Kim found that group identity traits, such as believing in the group's moral superiority, were linked to less concurrence seeking, better decision-making, better team activities, and better team performance. This study also showed that the relationship between groupthink and defective decision making was insignificant. These findings mean that, in the right circumstances, groupthink does not always have negative outcomes. They also call the original theory of groupthink into question. Scholars are challenging the original view of groupthink proposed by Janis. Whyte (1998) argues that a group's collective efficacy, i.e. confidence in its abilities, can lead to reduced vigilance and a higher risk tolerance, similar to how groupthink was described. McCauley (1998) proposes that the attractiveness of group members might be the most prominent factor in causing poor decisions. Turner and Pratkanis (1991) suggest that, from a social identity perspective, groupthink can be seen as a group's attempt to ward off potentially negative views of the group. Together, the contributions of these scholars have brought about new understandings of groupthink that help reformulate Janis' original model. According to one theory, many of the basic characteristics of groupthink – e.g., strong cohesion, indulgent atmosphere, and exclusive ethos – are the result of a special kind of mnemonic encoding (Tsoukalas, 2007). Members of tightly knit groups have a tendency to represent significant aspects of their community as episodic memories, and this has a predictable influence on their group behavior and collective ideology, as opposed to what happens when those aspects are encoded as semantic memories (which is common in formal and looser group formations). According to scientist Todd Rose, collective illusions and groupthink are linked concepts that show how social dynamics affect behavior. Groupthink occurs when individuals who are right about what the group wants conform to the group's consensus. Collective illusions are a specific form of groupthink in which individuals mistakenly assume what the group wants, leading everyone to behave in ways that do not reflect their true preferences. Both concepts involve social influence and conformity. In popular culture In the 1979 religious satire Monty Python's Life of Brian, the concept of groupthink is satirized through the reactions of the crowds to Brian and his would-be followers: when he urges them that "You don't need to follow me. You don't need to follow anybody. You've got to think for yourselves. You're all individuals.", they respond in unison "Yes. We're all different." (with one voice saying "I'm not.") The film highlights how easily people can be swayed by charismatic figures, adopt a single, often illogical viewpoint, and blindly follow without individual thought.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTECathcartKlein2007-16] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure include: As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build (a toy sketch of this template-driven approach appears at the end of this article). More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
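As a toy illustration of the template-driven punning programs described above, here is a small hypothetical sketch in F#; the knock-knock template and the word list are invented for illustration and are not taken from any program mentioned in the text:

// A toy template-based pun generator: a fixed knock-knock template is filled
// from a small, hand-made list of sound-alike pairs. There is no linguistic
// knowledge here; the program only substitutes strings into fixed slots.
let template (word : string) (soundsLike : string) =
    sprintf "Knock, knock. Who's there? %s. %s who? %s in, it's cold out here!"
        word word soundsLike

let punPairs =
    [ "Lettuce", "Let us"
      "Police",  "Please let me" ]

punPairs
|> List.map (fun (word, soundsLike) -> template word soundsLike)
|> List.iter (printfn "%s")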
========================================
[SOURCE: https://en.wikipedia.org/wiki/F_Sharp_(programming_language)] | [TOKENS: 2423]
Contents F Sharp (programming language) F# (pronounced F sharp) is a general-purpose, high-level, strongly typed, multi-paradigm programming language that encompasses functional, imperative, and object-oriented programming methods. It is most often used as a cross-platform Common Language Infrastructure (CLI) language on .NET, but can also generate JavaScript and graphics processing unit (GPU) code. F# is developed by the F# Software Foundation, Microsoft and open contributors. An open source, cross-platform compiler for F# is available from the F# Software Foundation. F# is a fully supported language in Visual Studio and JetBrains Rider. Plug-ins supporting F# exist for many widely used editors including Visual Studio Code, Vim, and Emacs. F# is a member of the ML language family and originated as a .NET Framework implementation of a core of the programming language OCaml. It has also been influenced by C#, Python, Haskell, Scala and Erlang. History F# uses an open development and engineering process. The language evolution process is managed by Don Syme from Microsoft Research as the benevolent dictator for life (BDFL) for the language design, together with the F# Software Foundation. Earlier versions of the F# language were designed by Microsoft and Microsoft Research using a closed development process. F# was first included in Visual Studio in the 2010 edition, at the same level as Visual Basic (.NET) and C# (albeit as an option), and remains in all later editions, thus making the language widely available and well-supported. F# originates from Microsoft Research, Cambridge, UK. The language was originally designed and implemented by Don Syme, according to whom the F# team says the F is for "Fun". Andrew Kennedy contributed to the design of units of measure. The Visual F# Tools for Visual Studio are developed by Microsoft. The F# Software Foundation developed the F# open-source compiler and tools, incorporating the open-source compiler implementation provided by the Microsoft Visual F# Tools team. Language overview F# is a strongly typed functional-first language with a large number of capabilities that are normally found only in functional programming languages, while supporting object-oriented features available in C#. Together, these features allow F# programs to be written in a completely functional style and also allow functional and object-oriented styles to be mixed. Examples of functional features are first-class and curried functions, pattern matching, and immutable data types such as tuples, records, discriminated unions and lists, all described below. F# is an expression-based language using eager evaluation and also in some instances lazy evaluation. Every statement in F#, including if expressions, try expressions and loops, is a composable expression with a static type. Functions and expressions that do not return any value have a return type of unit. F# uses the let keyword for binding values to a name. For example, let x = 7 binds the value 7 to the name x. New types are defined using the type keyword. For functional programming, F# provides tuple, record, discriminated union, list, option, and result types. A tuple represents an ordered group of n values, where n ≥ 0. The value n is called the arity of the tuple. A 3-tuple would be represented as (A, B, C), where A, B, and C are values of possibly different types. A tuple can be used to store values only when the number of values is known at design-time and stays constant during execution.
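As a brief illustration of the let bindings and tuples just described, the following is a minimal, hypothetical F# sketch; the names x, point and describe are illustrative and not taken from the source:

// 'let' binds a value to a name; bindings are immutable by default.
let x = 7

// A 3-tuple groups values of possibly different types; its arity is 3.
let point = (1, 2.5, "label")

// Tuples are usually taken apart positionally, here by pattern matching
// on the function argument.
let describe (a, b, c) =
    sprintf "a = %d, b = %f, c = %s" a b c

printfn "%s" (describe point)
printfn "x + 1 = %d" (x + 1)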
A record is a type where the data members are named. Given a record type with Name and Age fields, records can be created as let r = { Name="AB"; Age=42 }. The with keyword is used to create a copy of a record, as in { r with Name="CD" }, which creates a new record by copying r and changing the value of the Name field (assuming the record created in the previous example was named r). A discriminated union type is a type-safe version of C unions. Values of the union type can correspond to either union case. The types of the values carried by each union case are included in the definition of each case. The list type is an immutable linked list represented either using a head::tail notation (:: is the cons operator) or a shorthand as [item1; item2; item3]. An empty list is written []. The option type is a discriminated union type with choices Some(x) or None. F# types may be generic, implemented as generic .NET types. F# supports lambda functions and closures. All functions in F# are first class values and are immutable. Functions can be curried. Being first-class values, functions can be passed as arguments to other functions. Like other functional programming languages, F# allows function composition using the >> and << operators. F# provides sequence expressions that define a sequence seq { ... }, list [ ... ] or array [| ... |] through code that generates values. For example, a sequence expression can form the squares of the numbers 0 to 14 by filtering them out of the range of numbers from 0 to 25 (a sketch appears below). Sequences are generators – values are generated on-demand (i.e., are lazily evaluated) – while lists and arrays are evaluated eagerly. F# uses pattern matching to bind values to names. Pattern matching is also used when accessing discriminated unions – the union value is matched against pattern rules and a rule is selected when a match succeeds. F# also supports active patterns as a form of extensible pattern matching. They are used, for example, when multiple ways of matching on a type exist. F# supports a general syntax for defining compositional computations called computation expressions. Sequence expressions, asynchronous computations and queries are particular kinds of computation expressions. Computation expressions are an implementation of the monad pattern. F# support for imperative programming includes loops, arrays, and mutable state: values and record fields can be labelled as mutable. Also, F# supports access to all CLI types and objects such as those defined in the System.Collections.Generic namespace defining imperative data structures. Like other Common Language Infrastructure (CLI) languages, F# can use CLI types through object-oriented programming. F# supports object-oriented programming in both expressions and patterns. F# object type definitions can be class, struct, interface, enum, or delegate type definitions, corresponding to the definition forms found in C#. For example, a class can declare a constructor taking a name and age along with two properties (see the sketch below). F# supports asynchronous programming through asynchronous workflows. An asynchronous workflow is defined as a sequence of commands inside an async { ... } block, also sketched below. The let! keyword indicates that the expression on its right (getting the response) should be done asynchronously but the flow should only continue when the result is available.
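The following is a small, hedged F# sketch of several of the constructs just described; the names (Person, Shape, PersonClass, squares, fetchLength) and the use of HttpClient are assumptions made for the example and are not taken from the article.

// A record type with named data members; records are immutable by default.
type Person = { Name : string; Age : int }
let r = { Name = "AB"; Age = 42 }
let r2 = { r with Name = "CD" }                  // copy of r with only the Name field changed

// A discriminated union and pattern matching over its cases.
type Shape =
    | Circle of float
    | Square of float
let area shape =
    match shape with
    | Circle radius -> System.Math.PI * radius * radius
    | Square side -> side * side

// Option values handled by pattern matching.
let describeOption opt =
    match opt with
    | Some x -> sprintf "Some %d" x
    | None -> "None"

// A sequence expression: squares of 0..14, obtained by filtering the range 0..25.
// The sequence is generated lazily; a list or array literal would be evaluated eagerly.
let squares = seq { for i in 0 .. 25 do if i < 15 then yield i * i }

// A class with a constructor taking a name and age, declaring two properties.
type PersonClass(name : string, age : int) =
    member _.Name = name
    member _.Age = age

// An asynchronous workflow: let! awaits the result without blocking the thread.
let fetchLength (url : string) =
    async {
        use client = new System.Net.Http.HttpClient()
        let! body = client.GetStringAsync(url) |> Async.AwaitTask
        return body.Length
    }

In fetchLength, the let! binding waits for the HTTP response before the rest of the workflow continues.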
In other words, from the point of view of the code block, it is as if getting the response is a blocking call, whereas from the point of view of the system, the thread will not be blocked and may be used to process other flows until the result needed for this one becomes available. The async block may be invoked using the Async.RunSynchronously function. Multiple async blocks can be executed in parallel using the Async.Parallel function, which takes a list of async objects and creates another async object to run the tasks in the list in parallel. The resultant object is invoked using Async.RunSynchronously. Inversion of control in F# follows this pattern. Since version 6.0, F# supports creating, consuming and returning .NET tasks directly. Parallel programming is supported partly through the Async.Parallel, Async.Start and other operations that run asynchronous blocks in parallel. Parallel programming is also supported through the Array.Parallel functional programming operators in the F# standard library, direct use of the System.Threading.Tasks task programming model, direct use of the .NET thread pool and .NET threads, and through dynamic translation of F# code to alternative parallel execution engines such as GPU code. The F# type system supports units of measure checking for numbers: units of measure, such as meters or kilograms, can be assigned to floating point, unsigned integer and signed integer values. This allows the compiler to check that arithmetic involving these values is dimensionally consistent, helping to prevent common programming mistakes by ensuring that, for instance, lengths are not mistakenly added to times. The units of measure feature integrates with F# type inference to require minimal type annotations in user code. The F# static type checker provides this functionality at compile time, but units are erased from the compiled code. Consequently, it is not possible to determine a value's unit at runtime. F# allows some forms of syntax customizing via metaprogramming to support embedding custom domain-specific languages within the F# language, particularly through computation expressions. F# includes a feature for run-time meta-programming called quotations. A quotation expression evaluates to an abstract syntax tree representation of an F# expression. Similarly, definitions labelled with the [<ReflectedDefinition>] attribute can also be accessed in their quotation form. F# quotations are used for various purposes including to compile F# code into JavaScript and GPU code. Quotations represent their F# code expressions as data for use by other parts of the program, while requiring that the quoted code be syntactically correct F#. F# 3.0 introduced a form of compile-time meta-programming through statically extensible type generation called F# type providers. F# type providers allow the F# compiler and tools to be extended with components that provide type information to the compiler on-demand at compile time. F# type providers have been used to give strongly typed access to connected information sources in a scalable way, including to the Freebase knowledge graph. In F# 3.0 the F# quotation and computation expression features are combined to implement LINQ queries. The combination of type providers, queries and strongly typed functional programming is known as information rich programming. F# supports a variation of the actor programming model through the in-memory implementation of lightweight asynchronous agents.
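Two short, hedged sketches of features described above, units of measure and a lightweight asynchronous agent; the unit names m and s, the counter agent, and the messages posted to it are illustrative assumptions rather than code from the article.

// Units of measure: dimensionally inconsistent arithmetic is rejected at compile time.
[<Measure>] type m                        // metres
[<Measure>] type s                        // seconds
let distance = 100.0<m>
let time = 9.58<s>
let speed = distance / time               // inferred as float<m/s>
// let wrong = distance + time            // would not compile: incompatible units
// Units are erased during compilation, so speed is an ordinary float at runtime.

// A lightweight agent (MailboxProcessor) that receives and counts posted messages.
let counter =
    MailboxProcessor.Start(fun inbox ->
        let rec loop n =
            async {
                let! msg = inbox.Receive()
                printfn "received %s (message %d)" msg (n + 1)
                return! loop (n + 1)
            }
        loop 0)

counter.Post "first"
counter.Post "second"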
A minimal agent of this kind, together with two posted messages, is sketched above. Development tools Application areas F# is a general-purpose programming language. The SAFE Stack is an end-to-end F# stack to develop web applications. It uses ASP.NET Core on the server side and Fable on the client side. Alternative end-to-end F# options include the WebSharper framework and the Oxpecker framework. F# can be used together with the Visual Studio Tools for Xamarin to develop apps for iOS and Android. The Fabulous library provides a more comfortable functional interface. Among others, F# is used for quantitative finance programming, energy trading and portfolio optimization, machine learning, business intelligence and social gaming on Facebook. In the 2010s, F# was positioned as an optimized alternative to C#. F#'s scripting ability and inter-language compatibility with all Microsoft products have made it popular among developers. F# can be used as a scripting language, mainly for desktop read–eval–print loop (REPL) scripting. Open-source community The F# open-source community includes the F# Software Foundation and the F# Open Source Group at GitHub, which host a number of popular open-source F# projects. Compatibility F# features a legacy "ML compatibility mode" that can directly compile programs written in, roughly, a large subset of OCaml with no functors, objects, polymorphic variants, or other additions. Examples A few small samples follow: a record type definition (records are immutable by default and are compared by structural equality); a Person class with a constructor taking a name and age and two immutable properties; the factorial function for non-negative 32-bit integers, a simple example often used to demonstrate the syntax of functional languages (a hedged sketch appears below); iteration examples; Fibonacci examples; a sample Windows Forms program; and an asynchronous parallel programming sample (parallel CPU and I/O tasks). See also Notes References External links
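A hedged sketch of three of the samples listed above: the Person class, the recursive factorial function, and a small parallel-async example. The versions in the original article may differ in detail, and the work function and its labels are assumptions made for this sketch.

// A Person class with a constructor taking a name and age and two immutable properties.
type Person(name : string, age : int) =
    member _.Name = name
    member _.Age = age

// The factorial function for non-negative 32-bit integers.
let rec factorial n =
    if n <= 1 then 1 else n * factorial (n - 1)
// factorial 0 evaluates to 1; factorial 5 evaluates to 120.

// Asynchronous parallel programming: run two async blocks in parallel and wait for both.
let work label (ms : int) =
    async {
        do! Async.Sleep ms
        return label
    }
let results =
    [ work "cpu-task" 100; work "io-task" 200 ]
    |> Async.Parallel
    |> Async.RunSynchronously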
========================================
[SOURCE: https://en.wikipedia.org/wiki/Standard_gravity] | [TOKENS: 834]
Contents Standard gravity The standard acceleration of gravity or standard acceleration of free fall, often called simply standard gravity, is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is a constant defined by ISO standard 80000 as 9.80665 m/s² (about 32.17405 ft/s²), denoted typically by ɡ₀ (sometimes also ɡₙ, ɡₑ,[a] or simply ɡ). This value was established by the third General Conference on Weights and Measures (1901, CR 70) and used to define the standard weight of an object as the product of its mass and this nominal acceleration. The acceleration of a body near the surface of the Earth is due to the combined effects of gravity and centrifugal acceleration from the rotation of the Earth (but the latter is small enough to be negligible for most purposes); the total (the apparent gravity) is about 0.5% greater at the poles than at the Equator. Although the symbol ɡ is sometimes used for standard gravity, ɡ (without a suffix) can also mean the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The symbol ɡ should not be confused with G, the gravitational constant, or g, the symbol for gram. The ɡ is also used as a unit for any form of acceleration, with the value defined as above (see also: g-force). The value of ɡ₀ defined above is a nominal midrange value on Earth, originally based on the acceleration of a body in free fall at sea level at a geodetic latitude of 45°. Although the actual acceleration of free fall on Earth varies according to location, the above standard figure is always used for metrological purposes. In particular, since it is the ratio of the kilogram-force and the kilogram, its numeric value when expressed in coherent SI units is the ratio of the kilogram-force and the newton, two units of force. History Already in the early days of its existence, the International Committee for Weights and Measures (CIPM) proceeded to define a standard thermometric scale, using the boiling point of water. Since the boiling point varies with the atmospheric pressure, the CIPM needed to define a standard atmospheric pressure. The definition they chose was based on the weight of a column of mercury of 760 mm. But since that weight depends on the local gravity, they now also needed a standard gravity. The 1887 CIPM meeting decided as follows: The value of this standard acceleration due to gravity is equal to the acceleration due to gravity at the International Bureau (alongside the Pavillon de Breteuil) divided by 1.0003322, the theoretical coefficient required to convert to a latitude of 45° at sea level. All that was needed to obtain a numerical value for standard gravity was now to measure the gravitational strength at the International Bureau. This task was given to Gilbert Étienne Defforges of the Geographic Service of the French Army. The value he found, based on measurements taken in March and April 1888, was 9.80991(5) m⋅s⁻². This result formed the basis for determining the value still used today for standard gravity. The third General Conference on Weights and Measures, held in 1901, adopted a resolution declaring as follows: The value adopted in the International Service of Weights and Measures for the standard acceleration due to Earth's gravity is 980.665 cm/s², value already stated in the laws of some countries.
The numeric value adopted for ɡ₀ was, in accordance with the 1887 CIPM declaration, obtained by dividing Defforges's result – 980.991 cm⋅s⁻² in the cgs system then in vogue – by 1.0003322, taking no more digits than are warranted by the uncertainty in the result. Conversions See also Notes References
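Written out, the division described above and the resulting relation between the kilogram-force and the newton are:

$$g_0 = \frac{980.991\ \mathrm{cm/s^2}}{1.0003322} \approx 980.665\ \mathrm{cm/s^2} = 9.80665\ \mathrm{m/s^2},$$

$$1\ \mathrm{kgf} = 1\ \mathrm{kg} \times g_0 = 9.80665\ \mathrm{N}.$$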
========================================
[SOURCE: https://en.wikipedia.org/wiki/Americans] | [TOKENS: 5562]
Contents Americans Americans are the citizens and nationals of the United States. U.S. federal law does not equate nationality with race or ethnicity, but rather with citizenship. The U.S. has 37 ancestry groups with more than one million individuals. White Americans form the largest racial and ethnic group at 61.6% of the U.S. population, with non-Hispanic Whites making up 57.8% of the population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the American population. Black Americans constitute the country's third-largest ancestry group and are 12.4% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 6% of the American population. The country's 3.7 million Native Americans account for about 1.1%, and some 574 native tribes are recognized by the federal government. People of American descent can be found internationally. As many as seven million Americans are estimated to be living abroad, and make up the American diaspora. The majority of Americans trace their roots to immigrants who arrived in what is now the United States, starting with European colonization in the 16th century. This includes European American groups such as the English, Irish, Germans, Italians, and others, as well as Africans forcibly brought as slaves during the Atlantic slave trade. However, the Native American population, whose ancestors inhabited the continent for thousands of years before European contact, is a key exception. Despite its multi-ethnic composition, the culture of the United States held in common by most Americans can also be referred to as mainstream American culture — a Western culture largely derived from the traditions of Northern and Western European colonists, settlers, and immigrants. It also includes significant influences of African-American culture. Westward expansion integrated the French-speaking Creoles and Cajuns of Louisiana and the Hispanos of the American Southwest, who brought close contact with the culture of Mexico. Large-scale immigration in the late 19th and early 20th centuries from Eastern and Southern Europe introduced a variety of new customs. Immigration from Africa, Asia, and Latin America has also had an impact. A cultural melting pot, or pluralistic salad bowl, describes the way in which generations of Americans have celebrated and exchanged distinctive cultural characteristics. Racial and ethnic groups The United States is a diverse country, both racially and ethnically. Six races are officially recognized by the United States Census Bureau for statistical purposes: Alaska Native and American Indian, Asian, Black or African American, Native Hawaiian and Other Pacific Islander, White, and people of two or more races. "Some other race" is also an option in the census and other surveys. The United States Census Bureau also classifies Americans as "Hispanic or Latino" and "Not Hispanic or Latino", which identifies Hispanic and Latino Americans as a racially diverse ethnicity that comprises the largest minority group in the nation. White Americans constitute the majority of the 331 million people living in the United States, with 204,277,273 people or 61.6% of the population in the 2020 United States census.[a] The US census defines "white" as "[a] person having origins in any of the original peoples of Europe, the Middle East, or North Africa".
Non-Hispanic Whites, which only account for 57.8% of the population, or 191,697,647 people, are the majority in 44 states. There are six minority-majority states: California, Texas, Maryland, New Mexico, Nevada, and Hawaii. In addition, the District of Columbia and the five inhabited U.S. territories have a non-white majority. The state with the highest percentage of non-Hispanic White Americans is Maine, while the state with the lowest percentage is Hawaii. Europe is the largest continent that Americans trace their ancestry to, and many claim descent from various European ethnic groups. The Spaniards were among the first Europeans to establish a continuous presence in what is now the continental United States in 1565 in San Agustín, La Florida then a part of New Spain. Virginia Dare (b. 1587) in Roanoke Island in present-day North Carolina, was the first child born in the original Thirteen Colonies to English parents. The Spaniards also established a continuous presence in what over three centuries later would become a possession of the United States with the founding of the city of San Juan, Puerto Rico, in 1521. Jewish Americans trace their ancestry primarily to Central and Eastern Europe, with smaller numbers from the Middle East and North Africa. Large numbers of Jewish immigrants arrived in the United States during the late 19th and early 20th centuries, fleeing persecution in the Russian Empire and other parts of Eastern Europe. They arrived in cities such as New York City, Chicago, and Philadelphia. In the 2020 United States census, English Americans 46.5 million (19.8%), German Americans 45m (19.1%), Irish Americans 38.6m (16.4%), and Italian Americans 16.8m (7.1%) were the four largest self-reported European ancestry groups in the United States constituting 62.4% of the white American population. However, the English Americans and British Americans demography is considered a serious under-count as they tend to self-report and identify as simply "Americans" (since the introduction of a new "American" category in the 1990 census) due to the length of time they have inhabited America. This is highly over-represented in the Upland South, a region that was settled historically by the British. Overall, as the largest group, European Americans have the lowest poverty rate and the second highest educational attainment levels, median household income, and median personal income of any racial demographic in the nation, second only to Asian Americans in the latter three categories. Hispanic and Latino Americans constitute the largest ethnic minority in the United States. They form the second largest group in the United States, comprising 62,080,044 people or 18.7% of the population according to the 2020 United States census.[b] Hispanic and Latino Americans are not considered a race in the United States census, instead forming an ethnic category. People of Spanish or Hispanic and Latino descent have lived in what is now United States territory since the founding of San Juan, Puerto Rico (the oldest continuously inhabited settlement on American soil) in 1521 by Juan Ponce de León, and the founding of St. Augustine, Florida (the oldest continuously inhabited settlement in the continental United States) in 1565 by Pedro Menéndez de Avilés. In the State of Texas, Spaniards first settled the region in the late 1600s and formed a unique cultural group known as Tejanos. Black and African Americans are citizens and residents of the United States with origins in sub-Saharan Africa. 
According to the Office of Management and Budget, the grouping includes individuals who self-identify as African American, as well as persons who emigrated from nations in the Caribbean and sub-Saharan Africa. The grouping is thus based on geography, and may contradict or misrepresent an individual's self-identification since not all immigrants from sub-Saharan Africa are "Black". Among these racial outliers are persons from Cape Verde, Madagascar, various Arab states, and Hamito-Semitic populations in East Africa and the Sahel, and the Afrikaners of Southern Africa. African Americans (also referred to as Black Americans or Afro-Americans, and formerly as American Negroes) are citizens or residents of the United States who have origins in any of the black populations of Africa. According to the 2020 United States census, there were 39,940,338 Black and African Americans in the United States, representing 12.4% of the population.[d] Black and African Americans make up the third largest group in the United States, after White and European Americans, and Hispanic and Latino Americans. The majority of the population (55%) lives in the South; compared to the 2000 United States census, there has also been a decrease of African Americans in the Northeast and Midwest. Most African Americans are the direct descendants of captives from Central and West Africa, from ancestral populations in countries like Nigeria, Benin, Sierra Leone, Guinea-Bissau, Senegal, and Angola, who survived the slavery era within the boundaries of the present United States. As an adjective, the term is usually spelled African-American. Montinaro et al. (2014) observed that around 50% of the overall ancestry of African Americans traces back to the Niger-Congo-speaking Yoruba of southwestern Nigeria and southern Benin (before the European colonization of Africa this people created the Oyo Empire), reflecting the centrality of this West African region in the Atlantic slave trade. Zakharaia et al. (2009) found a similar proportion of Yoruba associated ancestry in their African American samples, with a minority also drawn from Mandinka populations (founders of the Mali Empire), and Bantu populations (who had a varying level of social organization during the colonial era, while some Bantu peoples were still tribal, other Bantu peoples had founded kingdoms such as the Kingdom of Kongo). The first West African slaves were brought to Jamestown, Virginia in 1619. The English settlers treated these captives as indentured servants and released them after a number of years. This practice was gradually replaced by the system of race-based slavery used in the Caribbean. All the American colonies had slavery, but it was usually the form of personal servants in the North (where 2% of the people were slaves), and field hands in plantations in the South (where 25% were slaves); by the beginning of the American Revolutionary War 1/5th of the total population was enslaved. During the revolution, some would serve in the Continental Army or Continental Navy, while others would serve the British Empire in the Ethiopian Regiment, and other units. By 1804, the northern states (north of the Mason–Dixon line) had abolished slavery. However, slavery would persist in the southern states until the end of the American Civil War and the passage of the Thirteenth Amendment. 
Following the end of the Reconstruction era, which saw the first African American representation in Congress, African Americans became disenfranchised and subject to Jim Crow laws, legislation that would persist until the passage of the Civil Rights Act of 1964 and Voting Rights Act due to the civil rights movement. According to United States Census Bureau data, very few African immigrants self-identify as African American. On average, less than 5% of African residents self-reported as "African American" or "Afro-American" on the 2000 U.S. census. The overwhelming majority of African immigrants (~95%) identified instead with their own respective ethnicities. Self-designation as "African American" or "Afro-American" was highest among individuals from West Africa (4%–9%), and lowest among individuals from Cape Verde, East Africa and Southern Africa (0%–4%). African immigrants may also experience conflict with African Americans. Another significant population is the Asian American population, comprising 19,618,719 people in 2020, or 5.9% of the United States population.[e] California is home to 5.6 million Asian Americans, the greatest number in any state. In Hawaii, Asian Americans make up the highest proportion of the population (57 percent). Asian Americans live across the country, yet are heavily urbanized, with significant populations in the Greater Los Angeles Area, New York metropolitan area, and the San Francisco Bay Area. The United States census defines Asian Americans as those with origins to the countries of East Asia, Southeast Asia, and South Asia. Although Americans with roots in West Asia were once classified as "Asian", they are now excluded from the term in modern census classifications. The largest sub-groups are immigrants or descendants of immigrants from Cambodia, mainland China, India, Japan, Korea, Laos, Pakistan, the Philippines, Taiwan, Thailand, and Vietnam. Asians overall have higher income levels than all other racial groups in the United States, including whites, and the trend appears to be increasing in relation to those groups. Additionally, Asians have a higher education attainment level than all other racial groups in the United States. For better or for worse, the group has been called a model minority. While Asian Americans have been in what is now the United States since before the Revolutionary War, relatively large waves of Chinese, Filipino, and Japanese immigration did not begin until the mid-to-late 19th century. Immigration and significant population growth continue to this day. Due to a number of factors, Asian Americans have been stereotyped as "perpetual foreigners". Middle Eastern Americans and North African Americans are Americans with ancestry from the Middle East and North Africa (MENA). According to the American Jewish Archives and the Arab American National Museum, the first Middle Easterners and North Africans (especially Jews) to arrive in the Americas landed in the late 15th to mid-16th centuries. Many fled ethnic or ethnoreligious persecution during the Spanish Inquisition. In 2014, the United States Census Bureau began finalizing the ethnic classification of people of Middle Eastern and North African ("MENA") origins. According to the Arab American Institute (AAI), Arab Americans have family origins in each of the 22 Arab League member states. 
Following consultations with MENA organizations, the Census Bureau announced in 2014 that it would establish a new MENA ethnic category for populations from the Middle East, North Africa, and the Arab world, separate from the "white" classification that these populations had previously sought in 1909. The groups felt that the earlier "white" designation no longer accurately represents MENA identity, so they successfully lobbied for a distinct categorization. This new category would also include Israeli Americans. The Census Bureau does not currently ask about whether one is Sikh, because it views them as followers of a religion rather than members of an ethnic group, and it does not combine questions concerning religion with race or ethnicity. As of December 2015, the sampling strata for the new MENA category includes the Census Bureau's working classification of 19 MENA groups, as well as Iranian, Turkish, Armenian, Afghan, Azerbaijani, and Georgian groups. In January 2018, it was announced that the Census Bureau would not include the grouping in the 2020 census. According to the 2020 United States census, there are 2,251,699 people who are Native Americans or Alaska Natives alone; they make up 0.7% of the total population.[f] According to the Office of Management and Budget (OMB), an "American Indian or Alaska Native" is a person whose ancestry have origins in any of the original peoples of North, Central, or South America. 2.3 million individuals who are American Indian or Alaskan Native are multiracial; additionally the plurality of American Indians reside in the Western United States (40.7%). Collectively and historically this race has been known by several names; as of 1995, 50% of those who fall within the OMB definition prefer the term "American Indian", 37% prefer "Native American" and the remainder have no preference or prefer a different term altogether. Among Americans today, levels of Native American ancestry (distinct from Native American identity) differ. Based on a sample of users of the 23andMe commercial genetic test, genomes of self-reported African Americans averaged to 0.8% Native American ancestry, those of European Americans averaged to 0.18%, and those of Latinos averaged to 18.0%. Another genetic study focusing on Native American ancestry in the general population found an average of 38% in Latinos, 1% in African Americans, and 0.1% for European American populations, respectively. Native Americans, whose ancestry is indigenous to the Americas, originally migrated to the two continents between 10,000 and 45,000 years ago. These Paleoamericans spread throughout the two continents and evolved into hundreds of distinct cultures during the pre-Columbian era. Following the first voyage of Christopher Columbus, the European colonization of the Americas began, with St. Augustine, Florida becoming the first permanent European settlement in the continental United States. From the 16th through the 19th centuries, the population of Native Americans declined in the following ways: epidemic diseases brought from Europe; genocide and warfare at the hands of European explorers, settlers and colonists, as well as between tribes; displacement from their lands; internal warfare, enslavement; and intermarriage. As defined by the United States Census Bureau and the Office of Management and Budget, Native Hawaiians and other Pacific Islanders are "persons having origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands". 
Previously called Asian Pacific American, along with Asian Americans beginning in 1976, this was changed in 1997. As of the 2020 United States census, there are 622,018 who reside in the United States, and make up 0.2% of the nation's total population.[g] 14% of the population have at least a bachelor's degree, and 15.1% live in poverty, below the poverty threshold. As compared to the 2000 United States census, this population grew by 40%; and 71% live in the West; of those over half (52%) live in either Hawaii or California, with no other states having populations greater than 100,000. The United States territories in the Pacific also have large Pacific Islander populations such as Guam and the Northern Mariana Islands (Chammoro), and American Samoa (Samoan). The largest concentration of Native Hawaiians and other Pacific Islanders, is Honolulu County in Hawaii, and Los Angeles County in the continental United States. The United States has a growing multiracial identity movement, and this group is one of the fastest growing demographics in the country. Multiracial Americans numbered 7.0 million in 2008, or 2.3% of the population; by the 2020 census the multiracial increased to 13,548,983, or 4.1% of the total population. They can be any combination of races (White, Black or African American, Asian, American Indian or Alaska Native, Native Hawaiian or other Pacific Islander, "some other race") and ethnicities. The largest population of Multiracial Americans were those of White and African American descent, with a total of 1,834,212 self-identifying individuals. Barack Obama, the 44th President of the United States, is multiracial — his mother is white (of English and Irish descent) and his father is black (of Kenyan descent) — though he identifies only as African American. According to the 2020 United States census, 8.4% or 27,915,715 Americans chose to self-identify with the "some other race" category, the third most popular option. The vast majority of this group was Hispanic or Latino. "Some other race" formed the single largest racial group of Hispanics, with 42.2% of Hispanic/Latino Americans, or 26,225,882 people, choosing to identify as some other race, as these Hispanic/Latinos may feel the United States census does not describe their mixed European and American Indian ancestry as they understand it to be. A significant portion of the Hispanic and Latino population self-identifies as Mestizo, particularly the Mexican and Central American community. Mestizo is not a racial category in the United States census, but signifies someone who has both European and American Indian ancestry. National personification Uncle Sam is a national personification of the United States and sometimes more specifically of the American government, with the first usage of the term dating from the War of 1812. He is depicted as a stern elderly white man with white hair and a goatee beard, and dressed in clothing that recalls the design elements of the flag of the United States – for example, typically a top hat with red and white stripes and white stars on a blue band, and red and white striped trousers. Columbia is a poetic name for the Americas and the feminine personification of the United States of America, made famous by African American poet Phillis Wheatley during the American Revolutionary War in 1776. 
It has inspired the names of many persons, places, objects, institutions, and companies in the Western Hemisphere and beyond, including the District of Columbia, the seat of government of the United States. Language English is the national language and official language of the United States at the federal level. Additionally, some laws—such as U.S. naturalization requirements—standardize English. In 2020, about 245 million, or 78% of the population aged five years and older, spoke only English at home. Spanish, spoken by 13.4% of the population at home, is the second most common language and the most widely taught second language. Prior to the signing of Executive Order 14224 in March 2025, which declared English the official language of the U.S., some Americans advocated making English the country's official language, as it is in at least 30 out of the 50 states. Both English and Hawaiian are official languages in Hawaii by state law. Alaska has declared its 20 Native American languages to be official, along with English. In South Dakota, both dialects of the Sioux language have been declared official, along with English. While neither has an official language, New Mexico has laws providing for the use of both English and Spanish, as Louisiana does for English and French. Other states, such as California, mandate the publication of Spanish versions of certain government documents, including court forms. Several insular territories grant official recognition to their native languages, along with English: Samoan and Chamorro are recognized by American Samoa and Guam, respectively; Carolinian and Chamorro are recognized by the Northern Mariana Islands; Spanish is an official language of Puerto Rico. Religion Religion in the United States has a high adherence level compared to other developed countries and a diversity in beliefs. The First Amendment to the country's Constitution prevents the Federal government from making any "law respecting an establishment of religion, or prohibiting the free exercise thereof". The U.S. Supreme Court has interpreted this as preventing the government from having any authority in religion. A majority of Americans report that religion plays a "very important" role in their lives, a proportion unusual among developed countries, though similar to the other nations of the Americas. Many faiths have flourished in the United States, including both later imports spanning the country's multicultural immigrant heritage and those founded within the country; these have led the United States to become the most religiously diverse country in the world. The United States has the world's largest Christian population. The majority of Americans (76%) are Christians, mostly within Protestant and Catholic denominations; these adherents constitute 48% and 23% of the population, respectively. Other religions include Buddhism, Hinduism, Islam, and Judaism, which collectively make up about 4% to 5% of the adult population. Another 15% of the adult population identifies as having no religious belief or no religious affiliation. According to the American Religious Identification Survey, religious belief varies considerably across the country: 59% of Americans living in Western states (the "Unchurched Belt") report a belief in God, yet in the South (the "Bible Belt") the figure is as high as 86%.
Several of the original Thirteen Colonies were established by settlers who wished to practice their religion without discrimination: the Massachusetts Bay Colony was established by English Puritans, Pennsylvania by Irish and English Quakers, Maryland by English and Irish Catholics, and Virginia by English Anglicans. Although some individual states retained established religious confessions well into the 19th century, the United States was the first nation to have no official state-endorsed religion. Modeling the provisions concerning religion within the Virginia Statute for Religious Freedom, the framers of the Constitution rejected any religious test for office. The First Amendment specifically denied the federal government any power to enact any law respecting either an establishment of religion or prohibiting its free exercise, thus protecting any religious organization, institution, or denomination from government interference. European Rationalist and Protestant ideals mainly influenced the decision. Still, it was also a consequence of the pragmatic concerns of minority religious groups and small states that did not want to be under the power or influence of a national religion that did not represent them. Culture American culture is primarily a Western culture, but is influenced by Native American, West African, Latin American, East Asian, and Polynesian cultures. The United States of America has its own unique social and cultural characteristics, such as dialect, music, arts, social habits, cuisine, and folklore. Its chief early European influences came from English, Scottish, Welsh, and Irish settlers of colonial America during British rule. British culture, due to colonial ties with Britain that spread the English language, legal system and other cultural inheritances, had a formative influence. Other important influences came from other parts of Europe, especially Germany, France, and Italy. Original elements also play a strong role, such as Jeffersonian democracy. Thomas Jefferson's Notes on the State of Virginia was perhaps the first influential domestic cultural critique by an American and a reaction to the prevailing European consensus that America's domestic originality was degenerate. Prevalent ideas and ideals that evolved domestically, such as national holidays, uniquely American sports, military tradition, and innovations in the arts and entertainment give a strong sense of national pride among the population as a whole. American culture includes both conservative and liberal elements, scientific and religious competitiveness, political structures, risk taking and free expression, materialist and moral elements. Despite certain consistent ideological principles (e.g. individualism, egalitarianism, faith in freedom and democracy), the American culture has a variety of expressions due to its geographical scale and demographic diversity. Diaspora Americans have migrated to many places around the world, including Argentina, Australia, Brazil, Canada, Chile, China, Costa Rica, France, Germany, Hong Kong, India, Japan, Mexico, New Zealand, Pakistan, the Philippines, South Korea, the United Arab Emirates, and the United Kingdom. Unlike migration from other countries, United States migration is not concentrated in specific countries, possibly as a result of the roots of immigration from so many different countries to the United States. As of 2016[update], there were approximately 9 million United States citizens living outside of the United States. As the result of U.S. 
tax and financial reporting requirements that apply to non-resident citizens, record numbers of American citizens renounced their U.S. citizenship in the decade from 2010 to 2020. In 2024 a new organization was created to lobby the U.S. Congress for relief from citizenship-based taxation that is often cited as the reason for the record renunciations. See also Notes References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mathematical_sociology] | [TOKENS: 4285]
Contents Mathematical sociology Mathematical sociology is an interdisciplinary field of research concerned with the use of mathematics within sociological research. History Starting in the early 1940s, Nicolas Rashevsky, and subsequently in the late 1940s, Anatol Rapoport and others, developed a relational and probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During the late 1940s, formulas were derived that connected local parameters such as closure of contacts – if A is linked to both B and C, then there is a greater than chance probability that B and C are linked to each other – to the global network property of connectivity. Moreover, acquaintanceship is a positive tie, but what about negative ties such as animosity among persons? To tackle this problem, graph theory, which is the mathematical study of abstract representations of networks of points and lines, can be extended to include these two types of links and thereby to create models that represent both positive and negative sentiment relations, which are represented as signed graphs. A signed graph is called balanced if the product of the signs of all relations in every cycle (links in every graph cycle) is positive. Through formalization by mathematician Frank Harary, this work produced the fundamental theorem of this theory. It says that if a network of interrelated positive and negative ties is balanced, e.g. as illustrated by the psychological principle that "my friend's enemy is my enemy", then it consists of two sub-networks such that each has positive ties among its nodes and there are only negative ties between nodes in distinct sub-networks. The imagery here is of a social system that splits into two cliques. There is, however, a special case where one of the two sub-networks is empty, which might occur in very small networks. In another model, ties have relative strengths. 'Acquaintanceship' can be viewed as a 'weak' tie and 'friendship' is represented as a strong tie. Like its uniform cousin discussed above, there is a concept of closure, called strong triadic closure. A graph satisfies strong triadic closure if, whenever A is strongly connected to B and B is strongly connected to C, then A and C must have a tie (either weak or strong). In these two developments we have mathematical models bearing upon the analysis of structure. Other early influential developments in mathematical sociology pertained to process. For instance, in 1952 Herbert A. Simon produced a mathematical formalization of a published theory of social groups by constructing a model consisting of a deterministic system of differential equations. A formal study of the system led to theorems about the dynamics and the implied equilibrium states of any group. The emergence of mathematical models in the social sciences was part of the zeitgeist in the 1940s and 1950s in which a variety of new interdisciplinary scientific innovations occurred, such as information theory, game theory, cybernetics and mathematical model building in the social and behavioral sciences. Approaches Focusing on mathematics within sociological research, mathematical sociology uses mathematics to construct social theories. Mathematical sociology aims to take sociological theory and to express it in mathematical terms.
The benefits of this approach include increased clarity and the ability to use mathematics to derive implications of a theory that cannot be arrived at intuitively. In mathematical sociology, the preferred style is encapsulated in the phrase "constructing a mathematical model." This means making specified assumptions about some social phenomenon, expressing them in formal mathematics, and providing an empirical interpretation for the ideas. It also means deducing properties of the model and comparing these with relevant empirical data. Social network analysis is the best-known contribution of this subfield to sociology as a whole and to the scientific community at large. The models typically used in mathematical sociology allow sociologists to understand how predictable local interactions are and they are often able to elicit global patterns of social structure. Further developments In 1954, a critical expository analysis of Rashevsky's social behavior models was written by sociologist James S. Coleman. Rashevsky's models and as well as the model constructed by Simon raise a question: how can one connect such theoretical models to the data of sociology, which often take the form of surveys in which the results are expressed in the form of proportions of people believing or doing something. This suggests deriving the equations from assumptions about the chances of an individual changing state in a small interval of time, a procedure well known in the mathematics of stochastic processes. Coleman embodied this idea in his 1964 book Introduction to Mathematical Sociology, which showed how stochastic processes in social networks could be analyzed in such a way as to enable testing of the constructed model by comparison with the relevant data. The same idea can and has been applied to processes of change in social relations, an active research theme in the study of social networks, illustrated by an empirical study appearing in the journal Science. In other work, Coleman employed mathematical ideas drawn from economics, such as general equilibrium theory, to argue that general social theory should begin with a concept of purposive action and, for analytical reasons, approximate such action by the use of rational choice models (Coleman, 1990). This argument is similar to viewpoints expressed by other sociologists in their efforts to use rational choice theory in sociological analysis although such efforts have met with substantive and philosophical criticisms. Meanwhile, structural analysis of the type indicated earlier received a further extension to social networks based on institutionalized social relations, notably those of kinship. The linkage of mathematics and sociology here involved abstract algebra, in particular, group theory. This, in turn, led to a focus on a data-analytical version of homomorphic reduction of a complex social network (which along with many other techniques is presented in Wasserman and Faust 1994). In regard to Rapoport's random and biased net theory, his 1961 study of a large sociogram, co-authored with Horvath turned out to become a very influential paper. There was early evidence of this influence. In 1964, Thomas Fararo and a co-author analyzed another large friendship sociogram using a biased net model. Later in the 1960s, Stanley Milgram described the small world problem and undertook a field experiment dealing with it. 
A highly fertile idea was suggested and applied by Mark Granovetter in which he drew upon Rapoport's 1961 paper to suggest and apply a distinction between weak and strong ties. The key idea was that there was "strength" in weak ties. Some programs of research in sociology employ experimental methods to study social interaction processes. Joseph Berger and his colleagues initiated such a program in which the central idea is the use of the theoretical concept "expectation state" to construct theoretical models to explain interpersonal processes, e.g., those linking external status in society to differential influence in local group decision-making. Much of this theoretical work is linked to mathematical model building, especially after the late 1970s adoption of a graph theoretic representation of social information processing, as Berger (2000) describes in looking back upon the development of his program of research. In 1962 he and his collaborators explained model building by reference to the goal of the model builder, which could be explication of a concept in a theory, representation of a single recurrent social process, or a broad theory based on a theoretical construct, such as, respectively, the concept of balance in psychological and social structures, the process of conformity in an experimental situation, and stimulus sampling theory. The generations of mathematical sociologists that followed Rapoport, Simon, Harary, Coleman, White and Berger, including those entering the field in the 1960s such as Thomas Fararo, Philip Bonacich, and Tom Mayer, among others, drew upon their work in a variety of ways. Present research Mathematical sociology remains a small subfield within the discipline, but it has succeeded in spawning a number of other subfields which share its goals of formally modeling social life. The foremost of these fields is social network analysis, which has become among the fastest growing areas of sociology in the 21st century. The other major development in the field is the rise of computational sociology, which expands the mathematical toolkit with the use of computer simulations, artificial intelligence and advanced statistical methods. The latter subfield also makes use of the vast new data sets on social activity generated by social interaction on the internet. One important indicator of the significance of mathematical sociology is that the general interest journals in the field, including such central journals as The American Journal of Sociology and The American Sociological Review, have published mathematical models that became influential in the field at large. More recent trends in mathematical sociology are evident in contributions to The Journal of Mathematical Sociology (JMS). Several trends stand out: the further development of formal theories that explain experimental data dealing with small group processes, the continuing interest in structural balance as a major mathematical and theoretical idea, the interpenetration of mathematical models oriented to theory and innovative quantitative techniques relating to methodology, the use of computer simulations to study problems in social complexity, interest in micro–macro linkage and the problem of emergence, and ever-increasing research on networks of social relations. Thus, topics from the earliest days, like balance and network models, continue to be of contemporary interest. 
The formal techniques employed remain many of the standard and well-known methods of mathematics: differential equations, stochastic processes and game theory. Newer tools like agent-based models used in computer simulation studies are prominently represented. Perennial substantive problems still drive research: social diffusion, social influence, social status origins and consequences, segregation, cooperation, collective action, power, and much more. Research programs Many of the developments in mathematical sociology, including formal theory, have exhibited notable decades-long advances that began with path-setting contributions by leading mathematical sociologists and formal theorists. This provides another way of taking note of recent contributions but with an emphasis on continuity with early work through the use of the idea of “research program,” which is a coherent series of theoretical and empirical studies based on some fundamental principle or approach. There are more than a few of these programs and what follows is no more than a brief capsule description of leading exemplars of this idea in which there is an emphasis on the originating leadership in each program and its further development over decades. (1) Rational Choice Theory and James S. Coleman: After his 1964 pioneering Introduction to Mathematical Sociology, Coleman continued to make contributions to social theory and mathematical model building and his 1990 volume, Foundations of Social Theory was the major theoretical work of a career that spanned the period from 1950s to 1990s and included many other research-based contributions. The Foundation book combined accessible examples of how rational choice theory could function in the analysis of such sociological topics as authority, trust, social capital and the norms (in particular, their emergence). In this way, the book showed how rational choice theory could provide an effective basis for making the transition from micro to macro levels of sociological explanation. An important feature of the book is its use of mathematical ideas in generalizing the rational choice model to include interpersonal sentiment relations as modifiers of outcomes and doing so such that the generalized theory captures the original more self-oriented theory as a special case, as point emphasized in a later analysis of the theory. The rationality presupposition of the theory led to debates among sociological theorists. Nevertheless, many sociologists drew upon Coleman's formulation of a general template for micro-macro transition to gain leverage on the continuation of topics central to his and the discipline's explanatory focus on a variety of macrosocial phenomena in which rational choice simplified the micro level in the interest of combining individual actions to account for macro outcomes of social processes. (2) Structuralism (Formal) and Harrison C. White: In the decades since his earliest contributions, Harrison White has led the field in putting social structural analysis on a mathematical and empirical basis, including the 1970 publication of Chains of Opportunity: System Models of Mobility in Organizations which set out and applied to data a vacancy chain model for mobility in and across organizations. His very influential other work includes the operational concepts of blockmodel and structural equivalence which start from a body of social relational data to produce analytical results using these procedures and concepts. 
These ideas and methods were developed in collaboration with his former students François Lorraine, Ronald Breiger, and Scott Boorman. These three are among the more than 30 students who earned their doctorates under White in the period 1963-1986. The theory and application of blockmodels has been set out in detail in a recent monograph. White's later contributions include a structuralist approach to markets and, in 1992, a general theoretical framework, later appearing in a revised edition. (3) Expectation states theory and Joseph Berger: Under Berger's intellectual and organizational leadership, Expectation States Theory branched out into a large number of specific programs of research on specific problems, each treated in terms of the master concept of expectation states. He and his colleague and frequent collaborator Morris Zelditch Jr not only produced work of their own but created a doctoral program at Stanford University that led to an enormous outpouring of research by notable former students, including Murray Webster, David Wagner, and Hamit Fisek. Collaboration with mathematician Robert Z. Norman led to the use of mathematical graph theory as a way of representing and analyzing social information processing in self-other interactions. Berger and Zelditch also advanced work in formal theorizing and mathematical model building as early as 1962 with a collaborative expository analysis of types of models. Berger and Zelditch stimulated advances in other theoretical research programs by providing outlets for the publication of new work, culminating in a 2002 edited volume that includes a chapter that presents an authoritative overview of Expectation states theory as a program of cumulative research dealing with group processes. (4) Formalization in Theoretical Sociology and Thomas J. Fararo: Many of this sociologist's contributions have been devoted to bringing mathematical thinking into greater contact with sociological theory. He organized a symposium attended by sociological theorists in which formal theorists delivered papers that were subsequently published in 2000. Through collaborations with students and colleagues his own theoretical research program dealt with such topics as macrostructural theory and E-state structuralism (both with former student John Skvoretz), subjective images of stratification (with former student Kenji Kosaka), tripartite structural analysis (with colleague Patrick Doreian) and computational sociology (with colleague Norman P. Hummon). Two of his books are extended treatments of his approach to theoretical sociology. (5) Social Network Analysis and Linton C. Freeman: In the early 1960s Freeman directed a sophisticated empirical study of community power structure. In 1978 he established the journal Social Networks. It rapidly became a major outlet for original research papers that used mathematical techniques to analyze network data. The journal also publishes conceptual and theoretical contributions, including his paper “Centrality in Social Networks: Conceptual Clarification.” In turn, the mathematical concept defined in that paper led to further elaborations of the ideas, to experimental tests, and to numerous applications in empirical studies. He is the author of a study of the history and sociology of the field of social network analysis. (6) Quantitative Methodology and Kenneth C. Land: Kenneth Land has been on the frontier of quantitative methodology in sociology as well as formal theoretical model building. 
The influential yearly volume Sociological Methodology has been one of Land's favorite outlets for the publication of papers that often lie at the intersection of quantitative methodology and mathematical sociology. Two of his theoretical papers appeared early in this journal: "Mathematical Formalization of Durkheim's Theory of Division of Labor" (1970) and "Formal Theory" (1971). His decades-long research program includes contributions relating to numerous special topics and methods, including social statistics, social indicators, stochastic processes, mathematical criminology, demography and social forecasting. Thus Land brings to these fields the combined skills of a statistician, a mathematician, and a sociologist. (7) Affect Control Theory and David R. Heise: In 1979, Heise published a groundbreaking formal and empirical study in the tradition of interpretive sociology, especially symbolic interactionism, Understanding Events: Affect and the Construction of Social Action. It originated a research program that has included his further theoretical and empirical studies and those of other sociologists, such as Lynn Smith-Lovin, Dawn Robinson and Neil MacKinnon. Definition of the situation and self-other definitions are two of the leading concepts in affect control theory. The formalism used by Heise and other contributors uses a validated form of measurement and a cybernetic control mechanism in which immediate feelings are compared with fundamental sentiments in such a way as to generate an effort to bring immediate feelings in a situation into correspondence with sentiments. In the simplest models, each person in an interactive pair is represented in terms of one side of a role relationship in which fundamental sentiments associated with each role guide the process of immediate interaction. A higher level of the control process can be activated in which the definition of the situation is transformed. This research program comprises several of the key chapters in a 2006 volume of contributions to control systems theory (in the sense of Powers 1975) in sociology. (8) "Distributive Justice Theory" and Guillermina Jasso: Since 1980, Jasso has treated problems of distributive justice with an original theory that uses mathematical methods. She has elaborated upon and applied this theory to a wide range of social phenomena. Her most general mathematical apparatus – with the theory of distributive justice as a special case – deals with any subjective comparison between some actual state and some reference level for it, e.g., a comparison of an actual reward with an expected reward. In her justice theory, she starts with a very simple premise, the justice evaluation function (the natural logarithm of the ratio of the actual reward to the just reward; a compact rendering is sketched just after this survey of programs), and then derives numerous empirically testable implications. (9) Collaborative research and John Skvoretz. A major feature of modern science is collaborative research in which the distinctive skills of the participants combine to produce original research. Skvoretz, in addition to his other contributions, has been a frequent collaborator in a variety of theoretical research programs, often using mathematical expertise as well as skills in experimental design, statistical data analysis and simulation methods. Some examples are: (1) Collaborative work on theoretical, statistical and mathematical problems in biased net theory. (2) Collaborative contributions to Expectation States Theory. (3) Collaborative contributions to Elementary Theory.
(4) Collaboration with Bruce Mayhew in a structuralist research program. From the early 1970s, Skvoretz has been one of the most prolific contributors to the advance of mathematical sociology. The above discussion could be expanded to include many other programs and individuals, including European sociologists such as Peter Abell and the late Raymond Boudon. Awards in mathematical sociology The Mathematical Sociology section of The American Sociological Association in 2002 initiated awards for contributions to the field, including The James S. Coleman Distinguished Career Achievement Award. (Coleman had died in 1995 before the section had been established.) Given every other year, the award has gone to some of those just listed in terms of their career-long research programs. The section's other categories of awards and their recipients are listed at the ASA Section on Mathematical Sociology. Texts and journals Mathematical sociology textbooks cover a variety of models, usually explaining the required mathematical background before discussing important work in the literature (Fararo 1973, Leik and Meeker 1975, Bonacich and Lu 2012). An earlier text by Otomar Bartos (1967) is still of relevance. Of wider scope and mathematical sophistication is the text by Rapoport (1983). A very reader-friendly and imaginative introduction to explanatory thinking leading to models is Lave and March (1975, reprinted 1993). The Journal of Mathematical Sociology (started in 1971) has been open to papers covering a broad spectrum of topics employing a variety of types of mathematics, especially through frequent special issues. Other sociology journals that publish papers making substantial use of mathematics include Computational and Mathematical Organization Theory, the Journal of Social Structure, and the Journal of Artificial Societies and Social Simulation. Articles in Social Networks, a journal devoted to social structural analysis, very often employ mathematical models and related structural data analyses. In addition – importantly indicating the penetration of mathematical model building into sociological research – the major comprehensive journals in sociology, especially The American Journal of Sociology and The American Sociological Review, regularly publish articles featuring mathematical formulations. See also References Further reading External links
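To make the kind of formalization surveyed above concrete, Jasso's justice evaluation function (program 8) can be written out. The following is a minimal rendering in conventional notation rather than a quotation from any particular paper; the symbols A for the actual reward, C for the just reward, and θ for a signature constant that scales and signs the evaluation are notational assumptions made here for illustration:

J = \theta \, \ln\!\left(\frac{A}{C}\right)

Under this form, J = 0 when the actual reward equals the just reward, J < 0 signals under-reward, and J > 0 signals over-reward; because the logarithm is concave, an under-reward of a given absolute size yields a larger-magnitude evaluation than an equal-sized over-reward, the asymmetry from which many of the theory's testable implications are derived.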
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ordovician_radiation] | [TOKENS: 2001]
Contents Great Ordovician Biodiversification Event The Great Ordovician Biodiversification Event (GOBE) was an evolutionary radiation of marine animal life throughout the Ordovician period, 40 million years after the Cambrian explosion, whereby the distinctive Cambrian fauna fizzled out to be replaced with a Paleozoic fauna rich in suspension-feeding and pelagic animals. It followed a series of Cambrian–Ordovician extinction events, and the resulting fauna went on to dominate the Palaeozoic oceans relatively unchanged. Marine diversity increased to levels typical of the Palaeozoic, and morphological disparity was similar to today's. The diversity increase was neither global nor instantaneous; it happened at different times in different places. The interplay of many geological and ecological factors likely produced the diversification. Consequently, there is unlikely to be a simple or straightforward explanation for the whole event. Duration According to a comprehensive study of biodiversity throughout the Palaeozoic, GOBE began 497.05 Ma and ended 467.33 Ma, lasting for 29.72 Myr. GOBE did not constitute one single event, as different clades diversified during different time intervals of the Late Cambrian and Early and Middle Ordovician. During the late Ordovician, diversification slowed down owing to increased endemism and interbasinal dispersal, bringing an end to GOBE. Causes Possible causes include an increase in marine oxygen content, changes in palaeogeography or tectonic activity, a modified nutrient supply, or global cooling. The dispersed positions of the continents, high level of tectonic/volcanic activity, warm climate, and high CO2 levels would have created a large, nutrient-rich ecospace, favoring diversification. There seems to be an association between orogeny and the evolutionary radiation, with the Taconic orogeny in particular being singled out as a driver of the GOBE by enabling greater erosion of nutrients such as iron and phosphorus and their delivery to the oceans around Laurentia. In addition, the changing geography led to a more diverse landscape, with more different and isolated environments; this no doubt facilitated the emergence of bioprovinciality, and speciation by isolation of populations. The widespread reef development on the Baltican shelf in particular is attributable to the landmass's northward drift into more oligotrophic waters, enabling diversification of its reef biota. Widespread volcanism and its delivery of biologically important trace metals have similarly been proposed as a GOBE trigger, albeit controversially. On the other hand, global cooling has also been offered as a cause of the radiation, with long-term biodiversity trends showing a positive correlation between cooling and biodiversity during GOBE. An uptick in fossil diversity correlates with the increasing abundance of cool-water carbonates over the course of this time interval. A transient, high-magnitude shift towards more positive carbon isotope ratios during the Floian may reflect the initiation of a cooling through organic carbon burial that has been proposed to have kickstarted GOBE. In the longer term as well, increasing carbon isotope ratios track biodiversity increase, further pointing to a link between cooling and GOBE. The cooling during the Middle and early Late Ordovician in particular is known for its associated burst of biodiversification.
The volcanic activity that created the Flat Landing Brook Formation in New Brunswick, Canada, may have caused rapid climatic cooling and biodiversification. Thallium isotope shifts show an expansion of oxic waters throughout deep water and shallow shelf environments during the latest Cambrian and earliest Ordovician, coeval with increasing burrowing depth and complexity observed among ichnofossils and increasing morphological complexity among body fossils. Thus, heightened oxygen availability may have been a key trigger for GOBE. Furthermore, Ordovician biodiversification pulses were closely linked to terminations of positive carbon isotope excursions, which are characteristic of anoxia, suggesting that diversification occurred in concert with increasing oxygen content. After the Steptoean positive carbon isotope excursion about 500 million years ago, the extinction in the ocean would have opened up new niches for photosynthetic plankton, which would absorb CO2 from the atmosphere and release large amounts of oxygen. More oxygen, and a more diversified photosynthetic plankton at the bottom of the food chain, would have affected the diversity of higher marine organisms and their ecosystems. In the Middle to Late Ordovician, after GOBE, an expansion of anoxic waters occurred in sync with a ~50% decline in benthic invertebrates in various epicontinental seas, providing further indirect support for a coupling of seawater oxygenation with Ordovician biodiversity. Another alternative is that the breakup of an asteroid led to the Earth being consistently pummelled by meteorites, although the proposed Ordovician meteor event happened 467.5±0.28 million years ago. Another effect of a collision between two asteroids, possibly beyond the orbit of Mars, is a reduction in sunlight reaching the Earth's surface due to the vast dust clouds created. Evidence for this geological event comes from the relative abundance of the isotope helium-3, found in ocean sediments laid down at the time of the biodiversification event. The most likely cause of the production of high levels of helium-3 is the bombardment of lithium by cosmic rays, something which could only have happened to material which travelled through space. However, rather than sparking evolutionary diversification, other lines of evidence point to the Ordovician meteor event instead postdating the Darriwilian biodiversity burst by about 600 kyr and the start of glaciation by 800 kyr. Instead of facilitating the radiation, the meteor event may instead have acted antagonistically, temporarily retarding and halting biological diversification, according to this interpretation. The above triggers would have been amplified by ecological escalation, whereby any new species would co-evolve with others, creating new niches through niche partitioning or trophic layering, or by providing new habitat. As with the Cambrian Explosion, it is likely that environmental changes drove the diversification of plankton, which permitted an increase in diversity and abundance of plankton-feeding lifeforms, including suspension feeders on the sea floor, and nektonic organisms in the water column. Effects If the Cambrian Explosion is thought of as "producing" the modern phyla, the GOBE can be considered as the "filling out" of these phyla with the modern (and many extinct) classes and lower-level taxa.
The GOBE is considered to be one of the most potent speciation events of the Phanerozoic era, increasing global diversity severalfold and leading to the establishment of the Palaeozoic evolutionary fauna. Notable taxonomic diversity explosions during this period include those of articulated brachiopods, gastropods, and bivalves. The acritarch record (the majority of acritarchs were probably marine algae) displays the Ordovician radiation beautifully; both diversity and disparity peaked in the middle Ordovician. The warm waters and high sea level (which had been rising steadily since the early Cambrian) permitted large numbers of phytoplankton to prosper; the diversification of the phytoplankton may have driven an accompanying radiation of zooplankton and suspension feeders. Taxonomic diversity increased manifold; the total number of marine orders doubled, and families tripled. Marine biodiversity reached levels comparable to those of the present day. Beta diversity was the most important component of biodiversity increase from the Furongian to the Tremadocian. From the Floian onward, alpha diversity dethroned beta diversity as the greater contributor to regional diversity patterns. In addition to a diversification, the event also marked an increase in the complexity of both organisms and food webs. The number of different life modes among hard-bodied organisms doubled. Taxa began to exhibit greater provincialism and have more localized ranges, with different faunas in different parts of the globe. Communities in reefs and deeper water began to take on a character of their own, becoming more clearly distinct from other marine ecosystems. Benthic environments drastically increased in the amount and variety of bioturbation. The planktonic realm was invaded as never before, with several invertebrate lineages colonising the open waters and initiating new food chains at the end of the Cambrian into the early Ordovician. Among the newcomers colonising the planktonic realm were trilobites and cephalopods. Estuarine environments also experienced increased colonisation by living organisms. And as ecosystems became more diverse, with more species being squeezed into the food web, a more complex tangle of ecological interactions resulted, promoting strategies such as ecological tiering. The global fauna that emerged during the GOBE went on to be remarkably stable until the catastrophic end-Permian extinction and the ensuing Mesozoic Marine Revolution. Relationship to the Cambrian Explosion Recent work has suggested that the Cambrian Explosion and GOBE, rather than being two distinct events, represented one continuous evolutionary radiation of marine life occurring over the entire Early Palaeozoic. An analysis of the Paleobiology Database (PBDB) and Geobiodiversity Database (GBDB) found no statistical basis for separating the two radiations into discrete events. A proposed biodiversity gap known as the Furongian Gap is thought by some researchers to have existed between the Cambrian Explosion and the GOBE, during the Furongian epoch, the final epoch of the Cambrian. However, whether this gap is real or an artefact of an incomplete fossil record is controversial. Analysis of the Guole Konservat-Lagerstätte and other sites in South China suggests the Furongian Gap did not exist, instead portraying this interval as one of rapid biotic turnovers. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/San_Francisco_Bay_Area] | [TOKENS: 15443]
Contents San Francisco Bay Area The San Francisco Bay Area, commonly known as the Bay Area, is a region of California surrounding and including San Francisco Bay, and anchored by the cities of Oakland, San Francisco, and San Jose. The Association of Bay Area Governments defines the Bay Area as including the nine counties that border the estuaries of San Francisco Bay, San Pablo Bay, and Suisun Bay: Alameda, Contra Costa, Marin, Napa, San Mateo, Santa Clara, Solano, Sonoma, and San Francisco. Other definitions may be either smaller or larger, and may include neighboring counties which are not officially part of the San Francisco Bay Area, such as the Central Coast counties of Santa Cruz, San Benito, and Monterey, or the Central Valley counties of San Joaquin, Merced, and Stanislaus. The Bay Area is known for its natural beauty, prominent universities, technology companies, and affluence. The Bay Area contains many cities, towns, airports, and associated regional, state, and national parks, connected by a complex multimodal transportation network. The earliest archaeological evidence of human settlements in the Bay Area dates back to 8000–10,000 BC. The oral tradition of the Ohlone and Miwok people suggests they have been living in the Bay Area for several hundreds if not thousands of years. The Spanish empire claimed the area beginning in the early period of Spanish colonization of the Americas. The earliest Spanish exploration of the Bay Area took place in 1769. The Mexican government controlled the area from 1821 until the 1848 Treaty of Guadalupe Hidalgo. Also in 1848, James W. Marshall discovered gold in nearby mountains, resulting in explosive immigration to the area and the precipitous decline of the Native population. The California gold rush brought rapid growth to San Francisco. California was admitted as the 31st state in 1850. A major earthquake and fire leveled much of San Francisco in 1906. During World War II, the Bay Area played a major role in America's war effort in the Asiatic-Pacific Theater, with the San Francisco Port of Embarkation, of which Fort Mason was one of 14 installations and location of the headquarters, acting as a primary embarkation point for American forces. Since then, the Bay Area has experienced numerous political, cultural, and artistic movements, developing unique local genres in music and art and establishing itself as a hotbed of progressive politics. The postwar Bay Area saw large growth in the financial and technology industries, creating an economy with a gross domestic product of over $1.3 trillion. In 2018 it was home to the third-highest concentration of Fortune 500 companies in the United States. The Bay Area is home to approximately 7.52 million people. The larger federal classification, the combined statistical area of the region which includes thirteen counties, is the second-largest in California—after the Greater Los Angeles area—and the fifth-largest in the United States, with over nine million people. The Bay Area's population is ethnically diverse: roughly three-fifths of the region's residents are Hispanic/Latino, Asian, African/Black, or Pacific Islander, all of whom have a significant presence throughout the region. Most of the remaining two-fifths of the population is non-Hispanic White American. The most populous cities of the Bay Area are Oakland, San Francisco, and San Jose, the latter of which had a population of 969,655 in 2023, making San Jose the area's largest city and the 13th-most populous in the United States. 
Despite its urban character, San Francisco Bay is one of California's most ecologically sensitive habitats, providing important ecosystem services such as filtering the pollutants and sediments from rivers and supporting a number of endangered species. In addition, the Bay Area is known for its stands of coast redwoods, many of which are protected in state and county parks. The region is additionally known for the complexity of its landforms, the result of millions of years of tectonic plate movements. Because the Bay Area is crossed by six major earthquake faults, the region is particularly exposed to hazards presented by large earthquakes. The climate is temperate and conducive to outdoor recreational and athletic activities such as hiking, running, and cycling. The Bay Area is host to teams in each of the five largest North American men's professional sports leagues and is a cultural center for music, theater, and the arts. It is also host to numerous higher education institutions, including research universities such as the University of California, Berkeley, and Stanford University, the latter known for helping to create the high tech center called Silicon Valley. Home to 101 municipalities and 9 counties, governance in the Bay Area involves numerous local and regional jurisdictions, often with broad and overlapping responsibilities. History The Coyote Hills Shell Mound, the earliest known archaeological evidence of human habitation of the Bay Area estuaries, dates to around 10,000 BCE, with evidence pointing to even earlier settlement in Point Reyes in Marin County. It has been conjectured that the people living in the Bay Area at the time of first European contact were descended from Siberian tribes who arrived at around 1,000 BCE by sailing over the Arctic Ocean and following the salmon migration. However the current academic consensus is compatible with the oral tradition of the Ohlone and Miwok peoples, which suggests they have been living in the Bay Area for several hundreds if not thousands of years. At the time of colonization, the Ohlone peoples in the Bay Area primarily lived on the San Francisco Peninsula, in the South Bay and in the East Bay, and the Miwok primarily lived in the North Bay, northern East Bay, and Central Valley. Ohlone villages were spread across the Peninsula, East Bay, South Bay, as well as further south into the Monterey Bay area. There were eight major divisions of Ohlone people, four of which were based in the Bay Area: the Karkin of the Carquinez Strait, the Chochenyo of the East Bay, the Ramaytush of the San Francisco Peninsula, and the Tamien of the South Bay. The Miwok had two major groups in the Bay Area: the Bay Miwok of Contra Costa and the Coast Miwok of Marin and Sonoma. In 1542, Juan Rodríguez Cabrillo explored the Pacific coast near the Bay Area though the expedition did not see the Golden Gate or the estuaries, likely due to fog. Sir Francis Drake became the first European to land in the area and claim it in June 1579, when he landed at Drakes Bay near Point Reyes. Even though he claimed the region for Queen Elizabeth I as Nova Albion or New Albion, the English made no immediate follow up to the claim. In 1595, Philip II of Spain tasked Sebastião Rodrigues Soromenho with mapping the west coast of the Americas. Soromenho set sail on Manila Galleon San Agustin on July 5, 1595 and in early November they reached land between Point St. George and Trinidad Head, north of the Bay Area, in the Lost Coast. 
The expedition followed the coast southward and on November 7 the San Agustin anchored in Drakes Bay, and claimed the region as Puerto y Bahía de San Francisco. In late November, a storm sank the San Agustin and killed between 7 and 12 people. On December 8, 80 remaining crew members set sail on the San Buenaventura, a launch which was partially constructed en route from the Philippines. Seeking the fastest route south, the expedition sailed past the Golden Gate, arriving at Puerto de Chacala, Mexico on January 17, 1596. The Bay Area estuaries remained unknown to Europeans until members of the Portolá expedition, while trekking along the California coast, encountered them in 1769 when the Golden Gate blocked their continued journey north. Several missions were founded in the Bay Area during this period. In 1806, a Spanish expedition led by Gabriel Moraga began at the Presidio, traveled south of the bay, and then east to explore the San Joaquin Valley. In 1821, Mexico gained its independence from Spain and the Bay Area became part of the Mexican province of Alta California, a period characterized by ranch life and visiting American trappers. Mexico's control of the territory would be short-lived, however, and in 1846 a party of settlers occupied Sonoma Plaza and proclaimed the independence of the new Republic of California. That same year, the Mexican–American War began, and American captain John Berrien Montgomery sailed the USS Portsmouth into the bay and seized San Francisco, which was then known as Yerba Buena, and raised the American flag for the first time over Portsmouth Square. In 1848, James W. Marshall's discovery of gold in the American River sparked the California gold rush, and within half a year 4,000 men were panning for gold along the river and finding $50,000 in gold per day. The promise of fabulous riches quickly led to a stampede of wealth-seekers descending on Sutter's Mill. The Bay Area's population quickly emptied out as laborers, clerks, waiters, and servants joined the rush to find gold, and California's first newspaper, The Californian, was forced to announce a temporary freeze in new issues due to labor shortages. By the end of 1849, news had spread across the world and newcomers flooded into the Bay Area at a rate of one thousand per week on their way to California's interior, including the first large influx of Chinese immigrants to the U.S. The rush was so great that vessels were abandoned by the hundreds in San Francisco's ports as crews rushed to the goldfields. The unprecedented influx of new arrivals spread the nascent government authorities thin, and the military was unable to prevent desertions. As a result, numerous vigilante groups formed to provide order, but many tasked themselves with forcibly moving or killing local Native Americans, and by the end of the gold rush, two thirds of the indigenous population had been killed. During this same time, a constitutional convention was called to determine California's application for statehood into the United States. After statehood was granted, the capital city moved between three cities in the Bay Area: San Jose (1849–1851), Vallejo (1851–1852), and Benicia (1852–1853) before permanently settling in Sacramento in 1854. As the gold rush subsided, wealth generated from the endeavor led to the establishment of Wells Fargo Bank and the Bank of California, and immigrant laborers attracted by the promise of wealth transformed the demographic makeup of the region. 
Construction of the first transcontinental railroad from the Oakland Long Wharf attracted so many laborers from China that by 1870, eight percent of San Francisco's population was of Asian origin. The completion of the railroad connected the Bay Area with the rest of the United States, established a truly national marketplace for the trade of goods, and accelerated the urbanization of the region. In the early morning of April 18, 1906, a large earthquake with an epicenter near the city of San Francisco hit the region. Immediate casualty estimates by the U.S. Army's relief operations were 498 deaths in San Francisco, 64 deaths in Santa Rosa, and 102 in or near San Jose, for a total of about 700. More recent studies estimate the total death count to be over 3,000, with over 28,000 buildings destroyed. Rebuilding efforts began immediately. Amadeo Peter Giannini, owner of the Bank of Italy (now known as Bank of America), had managed to retrieve the money from his bank's vaults before fires broke out across the city; his was the only bank with liquid funds readily available, and he was instrumental in loaning out funds for rebuilding efforts. Congress immediately approved plans for a reservoir in Hetch Hetchy Valley in Yosemite National Park, a plan it had denied a few years earlier, which now provides drinking water for 2.4 million people in the Bay Area. By 1915, the city had been sufficiently rebuilt and advertised itself to the world during the Panama–Pacific International Exposition that year, although the effects of the quake hastened the loss of the region's dominant status in California to the Los Angeles metropolitan area. During the 1929 stock market crash and subsequent economic depression, not a single San Francisco-based bank failed, while the region attempted to spur job growth by simultaneously undertaking two large infrastructure projects: construction of the Golden Gate Bridge, which would connect San Francisco with Marin County, and the Bay Bridge, which would connect San Francisco with Oakland and the East Bay. After the United States joined World War II in 1941, the Bay Area became a major domestic military and naval hub, with large shipyards constructed in Sausalito and across the East Bay to build ships for the war effort. The Army's San Francisco Port of Embarkation was the primary origin for Army forces shipping out to the Pacific Theater of Operations. That command consisted of fourteen installations, including Fort Mason, the Oakland Army Base, Camp Stoneman, and Fort McDowell in San Francisco Bay, and the sub-port of Los Angeles. After World War II, the United Nations was chartered in San Francisco, and in September 1951, the Treaty of San Francisco to re-establish peaceful relations between Japan and the Allied Powers was signed in San Francisco, entering into force a year later. In the years immediately following the war, the Bay Area saw a huge wave of immigration as populations increased across the region. Between 1950 and 1960, San Francisco welcomed over 100,000 new residents, inland suburbs in the East Bay saw their populations double, Daly City's population quadrupled, and Santa Clara's population quintupled. By the early 1960s, the Bay Area and the rest of Northern California became the center of the counterculture movement.
Telegraph Avenue in Berkeley and the Haight-Ashbury neighborhood in San Francisco were seen as centers of activity, with the hit American pop song "San Francisco (Be Sure to Wear Flowers in Your Hair)" further enticing like-minded individuals to join the movement in the Bay Area and leading to the Summer of Love. In the succeeding decades, the Bay Area would cement itself as a hotbed of New Left activism, student activism, opposition to the Vietnam War and other anti-war movements, the black power movement, and the gay rights movement. At the same time, parts of San Mateo and Santa Clara counties began to rapidly develop from an agrarian economy into a hotbed of the high-tech industry. Fred Terman, the director of a top-secret research project at Harvard University during World War II, joined the faculty at Stanford University in order to reshape the university's engineering department. His students, including David Packard and William Hewlett, would later help usher in the region's high-tech revolution. In 1955, Shockley Semiconductor Laboratory opened for business in Mountain View near Stanford, and although the business venture was a financial failure, it was the first semiconductor company in the Bay Area, and the talent that it attracted to the region eventually led to a high-tech cluster of companies later known as Silicon Valley. In 1989, in the middle of a World Series game between two Bay Area baseball teams, the Loma Prieta earthquake struck and caused widespread infrastructural damage, including the failure of the Bay Bridge, a major link between San Francisco and Oakland. Even so, the Bay Area's technology industry continued to expand, and growth in Silicon Valley accelerated: the United States census subsequently confirmed that San Jose had overtaken San Francisco in terms of population. The commercialization of the Internet in the middle of the decade rapidly created a speculative bubble in the high-tech economy known as the dot-com bubble. This bubble began collapsing in the early 2000s and the industry continued contracting for the next few years, nearly wiping out the market. Companies like Amazon.com and Google managed to weather the crash, however, and following the industry's return to normalcy, their market value increased significantly. Even as the growth of the technology sector transformed the region's economy, progressive politics continued to guide the region's political environment. By the turn of the millennium, non-Hispanic whites, the largest ethnic group in the United States, were only half of the population in the Bay Area as immigration among minority groups accelerated. During this time, the Bay Area was the center of the LGBT rights movement: in 2004, San Francisco began issuing marriage licenses to same-sex couples, a first in the United States, and four years later a majority of voters in the Bay Area rejected California Proposition 8, which sought to constitutionally restrict marriage to opposite-sex couples but ultimately passed statewide. The Bay Area was also the center of contentious protests concerning racial and economic inequality. In 2009, an African-American man named Oscar Grant was fatally shot by Bay Area Rapid Transit police officers, precipitating widespread protests across the region and even riots in Oakland. His name was symbolically tied to the Occupy Oakland protests two years later, which sought to fight social and economic inequality.
Geography The borders of the San Francisco Bay Area are not officially delineated, and the unique development patterns influenced by the region's topography, as well as unusual commute patterns caused by the presence of three central cities and employment centers located in various suburban locales, have led to considerable disagreement between local and federal definitions of the area. Because of this, Richard Walker, professor of geography at the University of California, Berkeley, claimed that "no other U.S. city-region is as definitionally challenged [as the Bay Area]." When the region began to rapidly develop during and immediately after World War II, local planners settled on a nine-county definition for the Bay Area, consisting of the counties that directly border the San Francisco, San Pablo, and Suisun estuaries: Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma counties. Today, this definition is accepted by most local governmental agencies, including the San Francisco Regional Water Quality Control Board, the Bay Area Air Quality Management District, the San Francisco Bay Restoration Authority, the Metropolitan Transportation Commission, and the Association of Bay Area Governments, the latter two of which partner to deliver a Bay Area Census using the nine-county definition. Various U.S. Federal government agencies use definitions that differ from their local counterparts' nine-county definition. For example, the Federal Communications Commission (FCC), which regulates broadcast, cable, and satellite transmissions, includes nearby Colusa, Lake and Mendocino counties in its "San Francisco-Oakland-San Jose" media market, but excludes eastern Solano County. On the other hand, the United States Office of Management and Budget, which designates metropolitan statistical areas (MSAs) and combined statistical areas (CSAs) for populated regions across the country, has five MSAs which include, wholly or partially, areas within the nine-county definition, and one CSA which includes eight Bay Area counties (excluding Sonoma) but also includes neighboring San Benito, Santa Cruz, San Joaquin, Merced, and Stanislaus counties. The Association of Bay Area Health Officers (ABAHO), an organization that fought local outbreaks of HIV/AIDS in the 1980s and the COVID-19 pandemic, including the Deltacron hybrid variant (2020–22), consists of the public health officers of the nine Bay Area counties, in addition to the Central Coast counties of Santa Cruz, San Benito, and Monterey and the city of Berkeley. Among locals, the nine-county Bay Area is divided into five sub-regions: the East Bay, North Bay, Peninsula, city of San Francisco, and South Bay. The "East Bay" is the densest region of the Bay Area outside of San Francisco and includes cities and towns in Alameda and Contra Costa counties centered around Oakland. As one of the larger subregions, the East Bay includes a variety of enclaves, including the suburban Tri-Valley area and the highly urban western part of the subregion that runs alongside the bay, including Oakland. The "North Bay" includes Marin, Sonoma, Napa, and Solano counties, and is the geographically largest and least populated subregion. The western counties of Marin and Sonoma are bounded by the Pacific Ocean on the west and the bay on the east and are characterized by their mountainous and wooded terrain.
Sonoma and Napa counties are known internationally for their grape vineyards and wineries, and Solano County to the east, centered around Vallejo, is the fastest growing region in the Bay Area. The "Peninsula" subregion includes the cities and towns on the San Francisco Peninsula, excluding the titular city of San Francisco. Its eastern half, which runs alongside the Bay, is highly populated, while its less populated western coast traces the coastline of the Pacific Ocean and is known for its open space and hiking trails. Roughly coinciding with the borders of San Mateo County, it also includes the northwestern Santa Clara County cities of Palo Alto, Mountain View, and Los Altos. Although geographically located on the tip of the San Francisco Peninsula, the city of San Francisco is not considered part of the "Peninsula" subregion, but as a separate entity. The term "South Bay" has different meanings to different groups: Writing in 1959 for the Army Corps of Engineers, the United States Department of Commerce defined the South Bay as comprising five counties, corresponding to their two-way division of the bay into north and south regions. In 1989, the federal Environmental Protection Agency defined the South Bay as the northern part of Santa Clara County and the southeastern part of San Mateo County. The Bay Area is located in the warm-summer Mediterranean climate zone (Köppen Csb) that is a characteristic of California's coast, featuring mild to cool winters with occasional rainfall, and warm to hot, dry summers. It is largely influenced by the cold California Current, which penetrates the natural mountainous barrier along the coast by traveling through various gaps. In terms of precipitation, this means that the Bay Area has pronounced seasons. The winter season, which roughly runs between November and March, is the source of about 82% of annual precipitation in the area. In the South Bay and further inland, while the winter season is cool and mild, the summer season is characterized by warm sunny days, while in San Francisco and areas closer to the Golden Gate strait, the summer season is periodically affected by fog. Due to the Bay Area's diverse topographic relief (itself the result of the clashing tectonic plates), the region is home to numerous microclimates that lead to pronounced differences in climate and temperature over short distances. Within the city of San Francisco, natural and artificial topographical features direct the movement of wind and fog, resulting in startlingly varied climates between city blocks. Along the Golden Gate Strait, oceanic wind and fog from the Pacific Ocean are able to penetrate the mountain barriers inland into the Bay Area. During the summer, rising hot air in California's interior valleys creates a low pressure area that draws winds from the North Pacific High through the Golden Gate, which creates the city's characteristic cool winds and fog. The microclimate phenomenon is most pronounced during this time, when fog penetration is at its maximum in areas near the Golden Gate strait, while the South Bay and areas further inland are sunny and dry. Along the San Francisco peninsula, gaps in the Santa Cruz Mountains, one south of San Bruno Mountain and another in Crystal Springs, allow oceanic weather into the interior, causing a cooling effect for cities along the Peninsula and even as far south as San Jose. This weather pattern is also the source for delays at San Francisco International Airport. 
In Marin County, north of the Golden Gate strait, two gaps north of Muir Woods bring cold air across the Marin Headlands, with the cooling effect reaching as far north as Santa Rosa. Further inland, the East Bay receives oceanic weather that travels through the Golden Gate strait, and further diffuses that air through the Berkeley Hills, Niles Canyon and the Hayward Pass into the Livermore Valley and Altamont Pass. Here, the resulting breeze is so strong that the area is home to one of the world's largest arrays of wind turbines. Further north, the Carquinez Strait funnels the ocean weather into the San Joaquin River Delta, causing a cooling effect in Stockton and Sacramento, so that these cities are also cooler than their Central Valley counterparts in the south. The Bay Area is home to a diverse array of wildlife and, along with the connected San Joaquin River Delta, represents one of California's most important ecological habitats. California's Dungeness crab, Pacific halibut, and the California scorpionfish are all significant components of the bay's fisheries. The bay's salt marshes now represent most of California's remaining salt marsh and support a number of endangered species and provide key ecosystem services such as filtering pollutants and sediments from the rivers. Most famously, the bay is a key link in the Pacific Flyway; with millions of shorebirds annually visiting the bay shallows as a refuge, it is the most important component of the flyway south of Alaska. Many endangered species of birds are also found here: the California least tern, the California clapper rail, the snowy egret, and the black-crowned night heron. There is also a significant diversity of salmonids present in the bay. Steelhead populations in California have dramatically declined due to human and natural causes; in the Bay Area, all naturally spawned anadromous steelhead populations below natural and manmade impassable barriers in California streams from the Russian River to Aptos Creek, and the drainages of San Francisco, San Pablo, and Suisun Bays, are listed as threatened under the Federal Endangered Species Act. The Central California Coast coho salmon population is the most endangered of the many troubled salmon populations on the west coast of the United States, including populations residing in tributaries to San Francisco Bay. California Coast Chinook salmon were historically native to the Guadalupe River in San Francisco Bay, and Chinook salmon runs persist today in the Guadalupe River, Coyote Creek, Napa River, and Walnut Creek. Industrial, mining, and other uses of mercury have resulted in a widespread distribution of that poisonous metal in the bay, with uptake in the bay's phytoplankton and contamination of its sportfish. Aquatic mammals are also present in the bay. Before 1825, the Spanish, French, English, Russians, and Americans were drawn to the Bay Area to harvest prodigious quantities of beaver, river otter, marten, fisher, mink, fox, weasel, harbor and fur seals, and sea otters. This early fur trade, known as the California Fur Rush, was, more than any other single factor, responsible for opening up the West and the San Francisco Bay Area, in particular, to world trade. By 1817, sea otters in the area had been practically eliminated. Since then, the California golden beaver re-established a presence in Alhambra Creek, followed by the Napa River and Sonoma Creek in the north, and the Guadalupe River and Coyote Creek in the south.
The North American river otter, which was first reported in Redwood Creek at Muir Beach in 1996, has since been spotted in the North Bay's Corte Madera Creek and the South Bay's Coyote Creek, as well as, in 2010, in San Francisco Bay itself at the Richmond Marina. Other mammals include the internationally famous sea lions that began inhabiting San Francisco's Pier 39 after the 1989 Loma Prieta earthquake and the locally famous Humphrey the Whale, a humpback whale that entered San Francisco Bay twice on errant migrations in the late 1980s and early 1990s. Bottlenose dolphins and harbor porpoises have recently returned to the bay, having been absent for many decades. Historically, this was the northern extent of their warm-water species range. In addition to the many species of marine birds that can be seen in the Bay Area, many other species of birds make the Bay Area their home, making the region a popular destination for birdwatching. Many birds are listed as endangered species despite once being common in the region. Western burrowing owls were originally listed as a species of special concern by the California Department of Fish and Game in 1979. California's population declined 60% from the 1980s to the early 1990s, and continues to decline at roughly 8% per year. A 1992–93 survey reported little to no breeding burrowing owls in most of the western counties in the Bay Area, leaving only Alameda, Contra Costa, and Solano counties as remnants of a once large breeding range. Bald eagles were once common in the Bay Area, but habitat destruction and thinning of eggs from DDT poisoning reduced the California state population to 35 nesting pairs. Bald eagles disappeared from the Bay Area in 1915, and only began returning in recent years. In the 1980s, an effort to re-introduce the species to the area began with the Santa Cruz Predatory Bird Research Group and the San Francisco Zoo importing birds and eggs from Vancouver Island and northeastern California, and there are now nineteen nesting pairs in eight of the Bay Area's nine counties. Other once-absent species that have returned to the Bay Area include Swainson's hawk, the white-tailed kite, and the osprey. In 1927, zoologist Joseph Grinnell wrote that osprey were only rare visitors to the San Francisco Bay Area, although he noted records of one or two used nests in the broken tops of redwood trees along the Russian River. In 1989, the southern breeding range of the osprey in the Bay Area was Kent Lake, although osprey were noted to be extending their range further south in the Central Valley and the Sierra Nevada. In 2014, a Bay Area-wide survey found osprey had extended their breeding range southward, with nesting sites as far south as Hunters Point in San Francisco on the west side and Hayward on the east side, while further studies have found nesting sites as far south as the Los Gatos Creek watershed, indicating that the nesting range now includes the entire length of San Francisco Bay. Most nests were built on man-made structures close to areas of human disturbance, likely due to a lack of mature trees near the Bay. The wild turkey population was introduced in the 1960s by state game officials, and by 2015 had become a common sight in East Bay communities. The Bay Area is well known for the complexity of its landforms, which are the result of the forces of plate tectonics acting over millions of years, since the region is located at the meeting point of two tectonic plates.
Nine out of eleven distinct geologic assemblages have been identified in a single county, Alameda. Diverse assemblages adjoin in complex arrangements due to offsets along the many faults (both active and stable) in the area. As a consequence, many types of rock and soil are found in the region. The oldest rocks are metamorphic rocks that are associated with granite in the Salinian Block west of the San Andreas Fault. These were formed from sedimentary rocks of sandstone, limestone, and shale in uplifted seabeds. Volcanic deposits also exist in the Bay Area, left behind by movement of the San Andreas Fault, which sliced through a subducting plate and allowed magma to briefly flow to the surface. The region has considerable vertical relief in the landscapes that lie outside the alluvial plains leading to the bay and the inland valleys. The topography, and geologic history, of the Bay Area can largely be attributed to the compressive forces between the Pacific Plate and the North American Plate. The three major ridge structures in the Bay Area, part of the Pacific Coast Range, are all roughly parallel to the major faults. The Santa Cruz Mountains along the San Francisco Peninsula and the Marin Hills in Marin County follow the San Andreas Fault; the Berkeley Hills, San Leandro Hills, and their southern ridgeline extension through Mission Peak roughly follow the Hayward Fault; and the Diablo Range, which includes Mount Diablo and Mount Hamilton, runs along the Calaveras Fault. In total, the Bay Area is traversed by seven major fault systems with hundreds of related faults, all of which are stressed by the relative motion between the Pacific Plate and the North American Plate or by compressive stresses between these plates. The fault systems include the Hayward Fault Zone, Concord-Green Valley Fault, Calaveras Fault, Clayton-Marsh Creek-Greenville Fault, Rodgers Creek Fault, and the San Gregorio Fault. Significant blind thrust faults (faults with near vertical motion and no surface ruptures) are associated with portions of the Santa Cruz Mountains and the northern reaches of the Diablo Range and Mount Diablo. These "hidden" faults, which are not as well known, pose a significant earthquake hazard. Among the better-understood faults, as of 2014 scientists estimated a 72% probability of a magnitude 6.7 earthquake occurring along either the Hayward, Rodgers Creek, or San Andreas fault, with an earthquake more likely to occur on the East Bay's Hayward Fault. Two of the largest earthquakes in recent history were the 1906 San Francisco earthquake and the 1989 Loma Prieta earthquake. The Bay Area is home to a complex network of watersheds, marshes, rivers, creeks, reservoirs, and bays that predominantly drain into the San Francisco Bay and Pacific Ocean. The largest bodies of water in the Bay Area are the San Francisco, San Pablo, and Suisun estuaries. Major rivers of the North Bay include the Napa River, the Petaluma River, the Gualala River, and the Russian River; the former two drain into San Pablo Bay, the latter two into the Pacific Ocean. In the South Bay, the Guadalupe River drains into San Francisco Bay near Alviso. There are also several lakes present in the Bay Area, including man-made lakes like Lake Berryessa and natural albeit heavily modified lakes like Lake Merritt. Prior to the introduction of European agricultural methods, the shores of San Francisco Bay consisted mostly of tidal marshes.
Today, the bay has been significantly altered and heavily re-engineered to accommodate the needs of water delivery, shipping, agriculture, and urban development, with side effects including the loss of wetlands and the introduction of contaminants and invasive species. Approximately 85% of those marshes have been lost or destroyed, but about 50 marshes and marsh fragments remain. Huge tracts of the marshes were originally destroyed by farmers for agricultural purposes, then repurposed to serve as salt evaporation ponds to produce salt for food and other purposes. Today, regulations limit the destruction of tidal marshes, and large portions are currently being rehabilitated to their natural state. Over time, droughts and wildfires have increased in frequency and become less seasonal and more year-round, further straining the region's water security. Demographics According to the 2020 United States census, the population of the nine-county Bay Area was 7.76 million, with 49.6% male and 50.4% female. The racial makeup was 35.8% White (non-Hispanic), 27.7% Asian, 24.4% Hispanic or Latino (of any race), 5.6% non-Hispanic Black or African American, 0.5% Pacific Islander, 0.2% Native American or Alaska Native, and 5.7% two or more races. In 2017, approximately 2.3 million Bay Area residents were foreign born (30% of the 2020 census population). Demographically, the San Francisco Bay Area's population has the third-oldest median age in the U.S., following two Florida metros, and the Bay Area is the fastest-aging of any metropolitan area. Non-Hispanic whites form majorities of the population in Marin, Napa, and Sonoma counties. Whites also make up the majority in the eastern regions of the East Bay centered around the Lamorinda and Tri-Valley areas. Like much of the U.S., the Bay Area has a large Irish population, and this is reflected in the Richmond District area of San Francisco.[citation needed] San Jose has a Little Portugal, and San Francisco's North Beach district, now considered the Little Italy of the city, was once home to a significant Italian-American community. San Francisco, Marin County and the Lamorinda area all have substantial Jewish communities. There is a Little Russia community in northwestern San Francisco, and there are Russian communities throughout the Bay Area, especially in San Mateo County and Santa Clara County; there are also Eastern European American groups such as Ukrainians and Poles, numbering in the tens of thousands to hundreds of thousands, especially in San Francisco and on the Peninsula, including recent immigrants and American-born citizens of Eastern European descent. There are numerous Russian-, Ukrainian-, and Polish-speaking churches in San Francisco, the South Bay, the East Bay, and on the Peninsula.[citation needed] The Latino population is spread throughout the Bay Area, but among the nine counties, the greatest number live in Santa Clara County, while Contra Costa County has seen the highest growth rate. The largest Hispanic or Latino groups were those of Mexican (17.9%), Salvadoran (1.3%), Guatemalan (0.6%), Puerto Rican (0.6%) and Nicaraguan (0.5%) ancestry. Mexican Americans make up the largest share of Hispanic residents in Napa County, while Central Americans make up the largest share in San Francisco, many of whom live in the Mission District, which is home to many residents of Salvadoran and Guatemalan descent. The Asian-American population in the Bay Area is one of the largest in North America.
Asian-Americans make up the plurality in two major counties in the Bay Area: Santa Clara County and Alameda County. The largest Asian-American groups were those of Chinese (7.9%), Filipino (5.1%), Indian (3.3%), Vietnamese (2.5%), and Japanese (0.9%) heritage. Asian Americans also constitute a majority in Cupertino, Fremont, Milpitas, and Union City, and have significant populations in Dublin, Foster City, Hercules, Millbrae, San Ramon, Saratoga, Sunnyvale and Santa Clara. The cities of San Jose and San Francisco had the third and fourth most Asian-American residents in the United States. In San Francisco, Chinese Americans constitute 21.4% of the population and form the single largest ethnic group in the city. The Bay Area is home to over 382,950 Filipino Americans, one of the largest communities of Filipino people outside the Philippines, with Filipino Americans most heavily concentrated in American Canyon, Daly City, Fairfield, Hercules, South San Francisco, Union City and Vallejo. Santa Clara County, and increasingly the East Bay, house a significant Indian American community. There are more than 100,000 people of Vietnamese ancestry residing within San Jose city limits, the largest Vietnamese population of any city proper outside Vietnam. In addition, there is a sizable community of Korean Americans in Santa Clara County, where San Jose is located. East Bay cities such as Richmond, San Pablo, and Oakland, and the North Bay city of Santa Rosa, have sizable populations of Laotians and Cambodians in certain neighborhoods. Pacific Islanders such as Samoans and Tongans have the largest presence in East Palo Alto, where they constitute over 7% of the population. San Bruno also has a large Tongan population, as do San Mateo and South San Francisco, which also have smaller communities of Samoans. Visitacion Valley has a designated Pacific Islander district, and Samoans and Tongans have a presence in Southeast San Francisco and Daly City's Bayshore neighborhood.[citation needed] The African-American community of San Francisco was formerly substantial and had a thriving jazz scene, with the city known as the "Harlem of the West."[citation needed] While black residents formed one-seventh of the city's population in 1970, today they have mostly moved to parts of the East Bay and North Bay, including Antioch and Fairfield, or out of the Bay Area entirely. The South Park neighborhood of Santa Rosa was once home to a primarily black community until the 1980s, when many Latino immigrants settled in the area. Other cities with large numbers of African Americans include Vallejo (28%), Richmond (26%), East Palo Alto (17%) and the CDP of Marin City (38%). Suisun City and Vacaville both have African American populations that have grown rapidly since the 2000s.[citation needed] There are also Eritrean, Ethiopian and Nigerian communities.[citation needed] There is also a significant Middle Eastern and Balkan population. There are 4,000 Armenians in San Francisco, and some in the San Jose area. The San Jose area, especially the Campbell area and some areas off San Jose's Stevens Creek Blvd, contains a Bosnian community. There are several thousand Turks in San Francisco, and a Palestinian population is concentrated in Daly City and San Francisco.[citation needed] Since the economy of the Bay Area heavily relies on innovation and high-tech skills, a relatively educated population exists in the region.
Roughly 87.4% of Bay Area residents have attained a high school diploma or higher, while 46% of adults in the Bay Area have earned a post-secondary degree or higher. As of 2025, the Bay Area's population has the third-oldest median age among U.S. metropolitan regions and is the fastest-aging of any metropolitan area in the country.
The Bay Area is the wealthiest region per capita in the United States, due primarily to the economic engines of San Jose, San Francisco, and Oakland. The Bay Area city of Pleasanton has the second-highest household income in the country after New Canaan, Connecticut. However, discretionary income is comparable to that of the rest of the country, primarily because the higher cost of living offsets the higher incomes. By 2014, the Bay Area's wealth gap was considerable: the top ten percent of income-earners took home over eleven times as much as the bottom ten percent, and a Brookings Institution study found the San Francisco metro area, which excludes four Bay Area counties, to be the third most unequal urban area in the country. Forty-seven Bay Area residents made Forbes magazine's 2007 list of the 400 richest Americans.
Statistics regarding crime rates in the Bay Area generally fall into two categories: violent crime and property crime. Historically, violent crime has been concentrated in a few cities in the East Bay, namely Oakland, Richmond, Martinez, and Antioch, but also in East Palo Alto on the Peninsula, Vallejo in the North Bay, and San Francisco. Nationally, Oakland's murder rate ranked 18th among cities with over 100,000 residents, and the city ranked third for violent crimes per capita. According to a 2015 Federal Bureau of Investigation report, Oakland also had the most violent crime in the Bay Area, with 16.9 reported incidents per thousand people. Vallejo came in second, at 8.7 incidents per thousand people, while San Pablo, Antioch, and San Francisco rounded out the top five. East Palo Alto, which used to have the Bay Area's highest murder rate, saw violent crime incidents drop 65% between 2013 and 2014, while Oakland saw violent crime incidents drop 15%. Meanwhile, San Jose, which was one of the safest large cities in the United States in the early 2000s, has seen its violent crime rates trend upwards. Cities with the lowest rates of violent crime include the Peninsula cities of Los Altos and Foster City, the East Bay cities of San Ramon and Danville, and the southern foothill cities of Saratoga and Cupertino. In 2015, 45 Bay Area cities counted zero homicides, the largest of which was Daly City. In 2015, Oakland also saw the highest rate of property crime in the Bay Area, at 59.4 incidents per thousand residents, with San Francisco following close behind at 53 incidents per thousand residents. The East Bay cities of Pleasant Hill, Berkeley, and San Leandro rounded out the top five. Saratoga and Windsor had the lowest rates of property crime. Additionally, San Francisco saw the most reports of arson. Several street gangs operate in the Bay Area, including the Sureños and Norteños in San Francisco's Mission District. Oakland, which also sees organized gang violence, implemented Operation Ceasefire in 2012 in an effort to reduce the violence, with limited success.[citation needed]
Economy
The three principal cities of the Bay Area represent separate employment clusters and are dominated by distinct but intertwined industries.
San Francisco is home to the region's tourism and financial industries and hosts numerous conventions. The East Bay, centered around Oakland, is home to heavy industry, metalworking, oil, and shipping, while San Jose is the heart of Silicon Valley, a major center of economic activity built around the technology industry. Furthermore, the North Bay is a major player in the country's agriculture and wine industry. In all, the Bay Area is home to the second-highest concentration of Fortune 500 companies, after the New York metropolitan area, with thirty such companies based throughout the region. In 2024, the greater thirteen-county statistical area had a GDP of $1.408 trillion, the third-highest among U.S. combined statistical areas. The smaller nine-county Bay Area had a GDP of $1.332 trillion in the same year, which nonetheless would rank it fifth among U.S. states and 16th among countries.
The COVID-19 pandemic caused an exodus of businesses from the downtown cores of San Francisco, San Jose, and Oakland, as remote work became more widespread, especially in the tech and retail industries, and the importance of being physically located in the area declined. Some observers have warned that this could lead to an economic doom loop for Bay Area cities, particularly San Francisco, while others have argued that these concerns are restricted to the downtown cores. Many retailers in Downtown San Francisco and Downtown Oakland have closed since 2020, with some citing complex challenges with visible homelessness and crime in the area. This exodus has reversed since 2024 as San Francisco has become the epicenter of AI development, with OpenAI and Anthropic headquartered in the city.
The Bay Area is home to five of the world's ten largest companies by market capitalization. Major corporations headquartered in the region include Google, Facebook, Apple Inc., Clorox, Hewlett-Packard, Intel, Adobe Inc., Applied Materials, eBay, Cisco Systems, Symantec, Netflix, Sony Interactive Entertainment, Electronic Arts, and Salesforce; energy company PG&E; financial service company Visa Inc.; apparel retailers Gap Inc., Levi Strauss & Co., and Ross Stores; aerospace and defense contractor Lockheed Martin; local grocer Safeway; and biotechnology companies Genentech and Gilead Sciences. The largest manufacturers include Tesla Inc., Lam Research, Bayer, and Coca-Cola. The Port of Oakland is the fifth-largest container shipping port in the United States, and Oakland is also a major rail terminus. In research, NASA's Ames Research Center and the federal research facility Lawrence Livermore National Laboratory are based in Mountain View and Livermore, respectively. In the North Bay, Napa and Sonoma counties are well known for their wineries, including Fantesca Estate & Winery, Domaine Chandon California, and D'Agostini Winery.
Despite the economic growth generated by the Bay Area's industries, there is a significant level of poverty in the region. Rising housing prices and gentrification in the San Francisco Bay Area are often framed as symptomatic of high-income tech workers moving into previously low-income, underserved neighborhoods. Two notable policy strategies to prevent eviction due to rising rents are rent control and subsidies such as Section 8 and Shelter Plus Care. Moreover, in 2002, then-San Francisco Supervisor Gavin Newsom introduced the "Care Not Cash" initiative, diverting funds away from cash handouts (which he argued encouraged drug use) to housing.
This proved controversial: some suggested his rhetoric criminalized poverty, while others supported prioritizing housing as a solution. Contrary to the historical pattern of low incomes concentrating in the inner city, poverty in the Bay Area is now increasing more rapidly in suburban areas than in urban areas. It is not yet clear whether this suburbanization of poverty is due to the relocation of poor populations or to shifting income levels in the respective regions. However, the mid-2000s housing boom encouraged city dwellers to move into newly affordable houses in outlying suburbs, and these suburban housing developments were then hit hardest when the housing bubble burst in 2008. People in poverty in these areas also experience reduced access to transportation because public transit infrastructure is underdeveloped in the suburbs. Suburban poverty is most prevalent among Hispanics and Blacks, and affects native-born people more significantly than the foreign-born. Because greater proportions of their incomes go toward rent, many impoverished residents of the San Francisco Bay Area also face food insecurity and health setbacks.
Housing
The Bay Area is the most expensive place to live in the United States outside of Manhattan. Strong economic growth has created hundreds of thousands of jobs, but this growth, coupled with severe zoning restrictions on building new housing units, has resulted in an extreme housing shortage. For example, from 2012 to 2017, the San Francisco metropolitan area added 400,000 new jobs, but only 60,000 new housing units. As of 2016, the entire Bay Area had 3.6 million jobs and 2.6 million housing units, a ratio of about 1.4 jobs per housing unit, significantly above the ratio for the U.S. as a whole, which stands at about 1.1 jobs per housing unit (152 million jobs, 136 million housing units). As of 2017, the average income needed to purchase a house in the region was $179,390; the median price for a house was $895,000, and the average cost of a home in the Bay Area was $440,000, more than twice the national average. Additionally, the average monthly rent was $1,240, 50 percent more than the national average. In 2018, a Bay Area household income of $117,000 was classified as "low income" by the Department of Housing and Urban Development.
Given the high cost of living, many Bay Area residents allocate a large share of their income to housing: 20 percent of Bay Area homeowners spend more than half their income on housing, while roughly 25 percent of renters spend more than half of their income on rent. Spending an average of more than $28,000 per year on housing plus roughly $13,400 on transportation, Bay Area residents pay around $41,420 per year to live in the region. This combined housing and transportation cost represents about 59 percent of the Bay Area's median household income, underscoring the region's extreme cost of living. The high rate of homelessness in the Bay Area can be attributed to the high cost of living. An accurate count of homeless people living in the Bay Area is difficult to determine because homeless residents are hard to track. However, according to San Francisco's Department of Public Health, the number of homeless people in San Francisco alone is 9,975. Additionally, San Francisco has been found to have the most unsheltered homeless people in the country.
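As a minimal check of the arithmetic quoted above (the implied median household income of roughly $70,000 is an inference from the stated figures, not a number given by the cited sources):
\[
\frac{3.6\ \text{million jobs}}{2.6\ \text{million housing units}} \approx 1.4 \qquad\qquad \frac{152\ \text{million jobs}}{136\ \text{million housing units}} \approx 1.1
\]
\[
\$28{,}000 + \$13{,}400 \approx \$41{,}400 \qquad\qquad \frac{\$41{,}420}{0.59} \approx \$70{,}200\ \text{per year (implied median household income)}
\]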
Because of the high cost of housing, many workers in the Bay Area live far from their place of employment, contributing to one of the highest percentages of extreme commuters (commutes that take over ninety minutes in one direction) in the United States. For example, about 50,000 people commute daily from neighboring San Joaquin County into the nine-county Bay Area, and, more extremely, some workers commute semimonthly by plane.
Education
The Bay Area is home to a large number of colleges and universities. The first institution of higher education in the Bay Area, Santa Clara University, was founded by the Jesuits in 1851; the Jesuits also founded the University of San Francisco in 1855. San Jose State University was founded in 1857 and is the oldest public college on the West Coast of the United States. According to the Brookings Institution, 45% of residents of the two-county San Jose metro area have a college degree, and 43% of residents in the five-county San Francisco metro area have a college degree, making them the second- and fourth-highest-ranked metro areas in the country for higher educational attainment.
As of 2024, Stanford University is the highest-ranked university in the Bay Area according to U.S. News & World Report, and its business school is ranked No. 1 in the US, Canada, Europe and Asia by Bloomberg Businessweek. The University of California, Berkeley has been among the two highest-ranked public universities in the country for over two decades. Additionally, San Jose State University and Sonoma State University were ranked 3rd and 12th, respectively, among regional public colleges on the West Coast by U.S. News & World Report in 2024. The city of San Francisco is host to two additional University of California schools, neither of which confers undergraduate degrees. The University of California, San Francisco, is entirely dedicated to graduate education in health and biomedical sciences. It is ranked among the top five medical schools in the United States and operates the UCSF Medical Center, which is the highest-ranked hospital in California. The University of California, College of the Law, founded in Civic Center in 1878, is the oldest law school in California, and more judges on the state bench are its graduates than those of any other institution. The city is also host to a California State University school, San Francisco State University. Additional campuses of the California State University system in the Bay Area are Cal State East Bay in Hayward and Cal Maritime in Vallejo. The California Community Colleges system also operates a number of community colleges in the Bay Area. According to CNNMoney, the Bay Area community college with the highest "success" rate is De Anza College in Cupertino, which is also the tenth-highest ranked in the nation. Other well-ranked Bay Area community colleges include Foothill College, City College of San Francisco, West Valley College, Diablo Valley College, and Las Positas College.
Many scholars have pointed out the close relationship between education and the economy in the Bay Area. According to multiple reports, research universities such as Stanford; the University of California, Santa Cruz; and the University of California, Berkeley are essential to the culture and economy in the area. These universities also provide public programs that teach and enhance skills relevant to the local economies. These opportunities not only provide educational services to the community, but also generate significant amounts of revenue.
Public primary and secondary education in the Bay Area is provided through school districts organized in three forms (elementary school districts, high school districts, or unified school districts), each governed by an elected board. In addition, many Bay Area counties and the city of San Francisco operate "special service schools" geared towards providing education to students with disabilities or special needs. An alternative public educational setting is offered by charter schools, which third parties may establish under a renewable charter of up to five years. The mechanism for charter schools in the Bay Area is governed by the California Charter Schools Act of 1992. According to rankings compiled by U.S. News & World Report, the highest-ranked high school in California is the Pacific Collegiate School, located in Santa Cruz. Within the traditional nine-county boundaries, the highest-ranked high school is KIPP San Jose Collegiate in San Jose. Also among the top twenty high schools in California are Lowell, Monta Vista, Lynbrook, University Preparatory Academy, Mission San Jose, Oakland Charter, Henry M. Gunn, Gilroy Early College Academy, and Saratoga.
Transportation
Transportation in the San Francisco Bay Area relies on a complex multimodal infrastructure consisting of roads, bridges, highways, rail, tunnels, airports, ferries, and bike and pedestrian paths. The development, maintenance, and operation of these different modes of transportation are overseen by various agencies, including the California Department of Transportation (Caltrans), the San Francisco Municipal Transportation Agency, and the Metropolitan Transportation Commission. These and other organizations collectively manage several interstate highways and state routes, two subway networks, three commuter rail agencies, eight trans-bay bridges, transbay ferry service, local bus service, three international airports (San Francisco, San Jose, and Oakland), and an extensive network of roads, tunnels, and paths such as the San Francisco Bay Trail.
The Bay Area hosts an extensive freeway and highway system that is particularly prone to traffic congestion, with one study by Inrix concluding that the Bay Area's traffic was the fourth worst in the world. Gaps in San Francisco's freeway system, where traffic must use city streets, are partly the result of the Freeway Revolt, and damage from the 1989 Loma Prieta earthquake led to some freeway segments being removed rather than reinforced or rebuilt. The greater Bay Area contains the three principal north–south highways in California: Interstate 5, U.S. Route 101, and California State Route 1. U.S. 101 and State Route 1 directly serve the traditional nine-county region, while Interstate 5 bypasses the region to the east through San Joaquin County, providing a more direct Los Angeles–Sacramento route. Additional local highways connect the various subregions of the Bay Area.
There are over two dozen public transit agencies in the Bay Area with overlapping service areas and different modes, with designated connection points between the various operators. Bay Area Rapid Transit (BART), a heavy rail/metro system, operates in five counties and connects San Francisco and Oakland via the Transbay Tube. Other commuter rail systems link San Francisco with the Peninsula and San Jose (Caltrain), San Jose with the Tri-Valley Area and San Joaquin County (ACE), and Sonoma with Marin County (SMART).
In addition, Amtrak provides frequent commuter service linking San Jose and the East Bay with Sacramento, as well as long-distance service to other parts of the United States. Muni Metro operates a hybrid streetcar/subway system within the city of San Francisco, and VTA operates a light rail system in Santa Clara County. These rail systems are supplemented by numerous bus agencies and transbay ferries such as Golden Gate Ferry and the San Francisco Bay Ferry. Most of these agencies accept the Clipper card, a reloadable contactless smart card, as a universal electronic fare payment system.
Government and politics
Government in the San Francisco Bay Area consists of multiple actors, including 101 city and nine county governments, a dozen regional agencies, and a large number of single-purpose special districts such as municipal utility districts and transit districts. Incorporated cities are responsible for providing police service, zoning, issuing building permits, and maintaining public streets, among other duties. County governments are responsible for elections and voter registration, vital records, property assessment and records, tax collection, public health, agricultural regulations, and building inspections, among other duties. Public education is provided by independent school districts, each managed by an elected school board. A variety of special districts also exist, each serving a single purpose, such as delivering public transit in the case of the Bay Area Rapid Transit District or monitoring air quality in the case of the Bay Area Air Quality Management District.
The Bay Area is widely regarded as one of the most politically liberal regions in California and in the United States. Since the late 1960s, the Bay Area has cemented its role as the most liberal region in California politics, giving greater support to the center-left Democratic Party's candidates than any other region of the state, even as California as a whole trended towards the Democratic Party over time. According to research by the Public Policy Institute of California, the Bay Area and the North Coast counties of Humboldt and Mendocino were the most consistently and strongly liberal areas in California. According to the California Secretary of State, the Democratic Party holds a voter registration advantage in every congressional district, State Senate district, State Assembly district, and State Board of Equalization district, in all nine counties, and in all of the 101 incorporated municipalities in the Bay Area. On the other hand, the center-right Republican Party holds a voter registration advantage in only one State Assembly sub-district (the portion of the 4th in Solano County). According to the Cook Partisan Voting Index (CPVI), the Bay Area's districts tend to favor Democratic candidates by roughly 40 to 50 percentage points, considerably above the mean for California and the nation overall. In U.S. presidential elections since 1960, the nine-county Bay Area voted for Republican candidates only twice, in both cases for a candidate from California: Richard Nixon in 1972 and Ronald Reagan in 1980. The last Bay Area county to vote for a Republican presidential candidate was Napa County, which voted for George H. W. Bush in 1988. Since then, all nine Bay Area counties have voted consistently for the Democratic candidate, and currently, both of California's senators are Democrats, as are the representatives of all twelve congressional districts wholly or partially in the Bay Area.
Additionally, every Bay Area member of the California State Senate and the California State Assembly is a registered Democrat. The Bay Area's association with progressive politics has led to the term "San Francisco values" being used pejoratively by conservative commentators to describe the secular progressive culture in the area.
The Metropolitan Transportation Commission (MTC), formed in 1970 by the California Legislature, is the region's metropolitan planning organization, while the Association of Bay Area Governments (ABAG) serves as the regional planning agency and council of local governments. ABAG and MTC functionally merged by consolidating their staffs into a single team, effective July 1, 2017, although they maintain separate governing boards. In 2013, ABAG and MTC developed Plan Bay Area, the area's regional transportation plan, with a goal date of 2040. Other regional governance agencies include the Bay Area Air Quality Management District, the Bay Area Toll Authority, the San Francisco Bay Restoration Authority, and the Bay Conservation & Development Commission.
Culture
The Bay Area was a hub of the Abstract Expressionist movement in painting. It is associated with the works of Clyfford Still, who began teaching at the California School of Fine Arts (now the San Francisco Art Institute) in 1946. In 1950, Abstract Expressionist David Park painted Kids on Bikes, which retained many Abstract Expressionist elements but introduced distinguishing features that would later lead to the Bay Area Figurative Movement. While both the Figurative and Abstract Expressionist movements arose from art schools, Funk art grew out of the region's underground and was characterized by an informal sharing of technique in "cooperative" galleries instead of formal museums. The Bay Area's art movements would also be heavily influenced by the counterculture movement. The San Francisco Renaissance was an era of poetic activity in the 1950s centered in San Francisco and on poets such as Gary Snyder, Allen Ginsberg, and Lawrence Ferlinghetti. The movement, which often included visual and performing arts, was heavily influenced by cross-cultural interests, particularly Buddhism, Taoism, and East Asian culture more broadly. The Bay Area is also home to a thriving computer animation industry led by Pixar Animation Studios and Industrial Light & Magic.
The Bay Area has been home to several musical movements that left lasting influences on the genres they affected. San Francisco in particular was the center of the counterculture movement that led to the rise of the Grateful Dead, Jefferson Airplane, and Janis Joplin, all of whom are closely associated with the 1967 Summer of Love. Jimi Hendrix also had strong connections to the Bay Area, as he lived in Berkeley briefly as a child and played in many local venues in the 1960s. By the 1970s, San Francisco had developed a vibrant jazz scene, earning the moniker "Harlem of the West". At the same time, Bay Area bands such as Creedence Clearwater Revival became known for their political and socially conscious lyrics, particularly against the Vietnam War. Carlos Santana also rose to fame in the early 1970s with his band, Santana. Two former members of Santana, Neal Schon and Gregg Rolie, later led the band Journey.
During the 1980s and early 1990s, the Bay Area became home to heavy metal and hard rock bands and also to one of the largest and most influential thrash metal scenes, with contributions from Exodus, Testament, Death Angel, Forbidden, Vio-lence, Lȧȧz Rockit, Possessed, and Blind Illusion. Additionally, three of the "Big Four" thrash metal bands (Metallica, Slayer, and Megadeth), though originally from Los Angeles, contributed to Bay Area thrash metal by frequently playing shows in the area, especially early in their careers. The post-grunge era in the 1990s featured prominent Bay Area bands Third Eye Blind, Counting Crows, and Smash Mouth, and pop-punk bands such as Green Day. The 1990s also saw the emergence of the hyphy movement in hip hop, a term derived from Oakland slang for "hyperactive", pioneered by Bay Area rappers Andre "Mac Dre" Hicks, Mistah Fab, and E-40. Other notable rappers from the Bay Area include Lil B, Tupac Shakur, MC Hammer, Too $hort, and G-Eazy. Today, much of Oakland and East Bay rap is "conscious rap", which concerns itself with social issues and awareness. The Bay Area is also home to hundreds of classical music ensembles, from community choirs to professional orchestras, such as the San Francisco Symphony, California Symphony, Fremont Symphony Orchestra, Oakland Symphony, and the San Francisco Chamber Orchestra.
The Bay Area is the third-largest center of activity for theater companies and actors in the United States, after the New York City and Chicago metropolitan areas, with 400 companies in the region. Theatre Bay Area was founded in 1976 by the Magic Theatre and American Conservatory Theater (ACT) in San Francisco and the Berkeley Repertory Theatre in Berkeley. The latter two, along with the San Francisco Mime Troupe and Palo Alto-based Theatreworks, have each won a Regional Theatre Tony Award. Several famous actors have emerged from the Bay Area's theatre community, including Daveed Diggs and Darren Criss. Other local actors include James Carpenter, a stage actor who has performed at the ACT, Berkeley Repertory, and San Jose Repertory Theatre, among others; Rod Gnapp of the Magic Theatre Company; Campo Santo member Margo Hall; and Sean San Jose, one of the founders of the Campo Santo theater. The Bay Area also has an active youth theater scene. ACT and the Berkeley Repertory both run classes and camps for young actors, as do the Peninsula Youth Theater, Willow Glen Children's Theatre, Bay Area Children's Theater, Danville Children's Musical Theater, Marin Shakespeare, and many others.
Media
The San Francisco Bay Area is the tenth-largest television market and the fourth-largest radio market in the U.S. The Bay Area's oldest radio station, KCBS (AM), began as an experimental station in San Jose in 1909, before the beginning of commercial broadcasting. KALW, the Bay Area's first FM radio station, was the first FM station west of the Mississippi River when it signed on the air in 1941. KPIX, which began broadcasting in 1948, was the first television station to air in the Bay Area and Northern California. All major U.S. television networks have affiliates serving the region, including KTVU 2 (FOX), KRON-TV 4 (The CW), KPIX 5 (CBS), KGO-TV 7 (ABC), KQED-TV 9 (PBS), KNTV 11 (NBC), KICU-TV 36 (MyNetworkTV), KPYX 44 (Independent), KQEH 54 (PBS), and KKPX 65 (Ion). Bloomberg West, a show that focuses on topics pertaining to technology and business, was launched in 2011 and continues to broadcast from San Francisco.
Public broadcasting outlets include a television station and a radio station, both broadcasting from near the Potrero Hill neighborhood under the call letters KQED. KQED-FM is the most-listened-to National Public Radio affiliate in the country. Another local broadcaster, KPOO, is an independent, African-American owned and operated noncommercial radio station established in 1971. The largest newspapers in the Bay Area, and the most widely circulated in Northern California, are the San Francisco Chronicle and the San Jose Mercury News. The Chronicle is best known for the late Herb Caen, whose daily musings attracted critical acclaim and represented the "voice of San Francisco". The San Francisco Examiner, once the cornerstone of William Randolph Hearst's media empire and the home of Ambrose Bierce, declined in circulation over the years and now takes the form of a free daily tabloid. Additionally, most of the Bay Area's local regions and municipalities have their own newspapers, such as the East Bay Times and the San Mateo Daily Journal. The national newsmagazine Mother Jones is also based in San Francisco, and non-English-language newspapers include El Mundo, a free Spanish-language weekly distributed by the Mercury News, and several Chinese-language papers, the largest of which is Sing Tao Daily.
Sports and recreation
The Bay Area is home to five professional major league franchises in men's sports: the San Francisco 49ers of the National Football League (NFL), the San Francisco Giants of Major League Baseball (MLB), the Golden State Warriors of the National Basketball Association (NBA), the San Jose Sharks of the National Hockey League (NHL), and the San Jose Earthquakes of Major League Soccer (MLS). A professional women's soccer team, Bay FC of the National Women's Soccer League (NWSL), debuted in 2024, and the Golden State Valkyries played their first season in the Women's National Basketball Association (WNBA) in 2025.
In football, the 49ers play at Levi's Stadium and have won five Super Bowls (XVI, XIX, XXIII, XXIV, XXIX) and lost three (XLVII, LIV, and LVIII). A second NFL team in the Bay Area, the Oakland Raiders, played from 1970 to 1981 and 1995 to 2019 at the Oakland Coliseum; they won the Super Bowl twice during their tenure there. The team relocated to Los Angeles from 1982 to 1994 and to Las Vegas in 2020. In baseball, the Giants play at Oracle Park and have won eight World Series titles, three (2010, 2012, and 2014) since relocating to San Francisco in 1958. The Athletics, who played at the Oakland Coliseum, have won nine World Series titles, four in Oakland (1972, 1973, 1974, and 1989). The Athletics left Oakland in 2024 and will become the Las Vegas Athletics in 2028 after temporarily playing in the Sacramento area. In basketball, the Warriors play at Chase Center and have won seven NBA championships, five (1975, 2015, 2017, 2018, and 2022) since relocating to the Bay Area in 1962. The Warriors own the Valkyries, who also play at Chase Center. In ice hockey, the Sharks play at the SAP Center. They made their first and only Stanley Cup Final appearance in 2016 but did not win. In soccer, the Earthquakes play at PayPal Park and have won the MLS Cup twice, in 2001 and 2003. Bay FC joined the Earthquakes at PayPal Park and has competed in the NWSL since 2024. The Bay Area hosted matches during the 1994 FIFA World Cup at Stanford Stadium and will host matches during the 2026 FIFA World Cup at Levi's Stadium.
The Bay Area also hosted soccer competition during the 1984 Summer Olympics and will do so again during the 2028 Summer Olympics. The Bay Area is also home to numerous minor league franchises. In hockey, the San Jose Barracuda play in the American Hockey League and are the top affiliate of the San Jose Sharks, sharing the SAP Center in San Jose. In baseball, the San Jose Giants are the Low-A affiliate of the San Francisco Giants and play out of San Jose Municipal Stadium in the California League of Minor League Baseball. In soccer, the Oakland Roots SC play in the second division of American soccer and moved to the Oakland Coliseum in 2025. In the Indoor Football League, the Bay Area Panthers play at the SAP Center.
Six Bay Area universities are members of NCAA Division I, the highest level of college sports in the U.S. All three football-playing schools are in the Football Bowl Subdivision, the highest level of NCAA college football. The California Golden Bears and Stanford Cardinal compete in the Atlantic Coast Conference, and the San Jose State Spartans compete in the Mountain West Conference. The Cardinal and Golden Bears are intense rivals, with their football teams competing annually in the Big Game for the Stanford Axe. One of the most famous games in the rivalry occurred in 1982, when the Golden Bears defeated the Cardinal on a last-second kickoff return known as "The Play".
The Bay Area has an ideal climate for outdoor recreation, and activities such as hiking, cycling and jogging are popular. San Francisco alone contains more than 200 parks and more than 200 mi (320 km) of bicycle paths, lanes, and routes, and the Embarcadero and Marina Green are favored sites for skateboarding. Extensive public tennis facilities are available in Golden Gate Park and Dolores Park, as well as at smaller neighborhood courts throughout the city. Boating, sailing, windsurfing, and kitesurfing are popular on the bay, and the Bay Area hosted the 2013 America's Cup. San Francisco maintains a yacht harbor in the Marina District, where the St. Francis Yacht Club and Golden Gate Yacht Club are located, while the South Beach Yacht Club is located next to Oracle Park. Other Bay Area yacht clubs include the Alameda, Berkeley, Corinthian, Oakland, Presidio, Sausalito, and Sequoia yacht clubs.
See also References External links
========================================