[SOURCE: https://en.wikipedia.org/wiki/East_Frisian_jokes] | [TOKENS: 759]
East Frisian jokes

In German humour, East Frisian jokes (German: Ostfriesenwitz) belong to the group of riddle jokes about certain nationalities, in this case the East Frisians of northern Germany. The basic structure of these jokes is a simple question and answer: the question typically asks something about the nature of the East Frisians, and the humorous reply usually comes at the expense of the supposedly stupid and/or primitive East Frisian. Often the East Frisians are portrayed as farmers, rural folk or coastal dwellers. Many punch lines describe the foolishness of East Frisians by using a figure of speech or a word in a different sense (a pun or play on words). Sometimes the reverse situation occurs, in which the East Frisians are the wiser and are usually contrasted with a group of people from the southern German-speaking world. Comedians such as Otto Waalkes and Karl Dall include East Frisian jokes in their repertoires, usually in a free format. In East Frisia itself these jokes are usually accepted. The positive effect of a greater awareness of the relatively small region of East Frisia resulting from this humour is recognized and welcomed. A modern legend even suggests that these jokes were invented by the East Frisians.

History

The East Frisian form of joke arose in the late 1960s and triggered one of the first large, nationwide waves of jokes in Germany. Unlike other jokes about specific groups of people, the history of East Frisian jokes is fairly well known. The grammar school in Westerstede in Ammerland, a region neighbouring East Frisia, was and is attended by East Frisian pupils. As with many other nearby regions, there is frequent taunting and teasing between the peoples of East Frisia and the Ammerland. At this school it culminated in 1968 and 1969, when the student Borwin Bandelow, who later became a famous psychiatrist, published a series in the school newspaper, Der Trompeter, called "From research and teaching." This series was about the so-called "Homo ostfrisiensis", the supposedly clumsy and stupid people of East Frisia. Wiard Raveling, himself an East Frisian and a teacher at this school, published the "History of East Frisian Jokes" in book form in 1993. The series in the student newspaper set off a wave of jokes, which spread first through the region and was soon picked up by radio, newspapers and magazines across Germany. Media such as Stern and Spiegel reported on the curious neighbourhood dispute between East Frisians and Ammerlanders, and spread it further by passing on the jokes. These were soon overtaken by adaptations, in numerous variations, of the Polish jokes that had recently arisen in the U.S. in the 1960s, as well as of jokes about other groups of people. In 1971 the East Frisian comedian and singer Hannes Flesner released several LPs of the then-new East Frisian jokes ("East Frisia, as it laughs and sings"). Later, the two comedians from East Frisia, Otto Waalkes and Karl Dall, among others, built their careers on East Frisian jokes or the stereotype of the East Frisians and their country. Later joke waves, such as those in the 1980s about Federal Chancellor Helmut Kohl and about Opel Manta drivers, or shortly thereafter about blondes in the 1990s, partly took over the structure and content of the East Frisian jokes.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PySide] | [TOKENS: 406]
PySide

PySide is a Python binding of the cross-platform GUI toolkit Qt developed by The Qt Company, as part of the Qt for Python project. It is one of the alternatives to the standard library package Tkinter. Like Qt, PySide is free software. PySide supports Linux/X11, macOS, and Microsoft Windows. The project can also be cross-compiled to embedded systems such as the Raspberry Pi and Android devices.

History

By 2009, Nokia, the then owner of the Qt toolkit, wanted a Python binding available under the LGPL license. Nokia failed to reach an agreement with Riverbank Computing, the developers of the PyQt Python binding, and in August 2009 released PySide. It provided similar functionality, but under the LGPL. 'Side' is Finnish for binding. There have been three major versions of PySide: PySide version 1 was released in August 2009 under the LGPL by Nokia, then the owner of the Qt toolkit, after it failed to reach an agreement with PyQt developers Riverbank Computing to change its licensing terms to include the LGPL as an alternative license. It supported Qt 4 under the operating systems Linux/X11, Mac OS X, Microsoft Windows, Maemo and MeeGo, while the PySide community added support for Android. PySide2 was started by Christian Tismer in 2015 to port PySide from Qt 4 to Qt 5. The project was then folded into the Qt Project and was released in December 2018. PySide6 was released in December 2020. It added support for Qt 6 and removed support for all Python versions older than 3.6. The project started out using Boost.Python from the Boost C++ libraries for the bindings. It later created its own binding generator named Shiboken, to reduce the size of the binaries and the memory footprint.[when?]

"Hello, world!" example
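The original example is not reproduced in this extract. As a rough illustration, the following is a minimal "Hello, world!" sketch for PySide6, assuming the package has been installed (for example with pip install PySide6); it is not the article's original listing.

```python
# Minimal PySide6 "Hello, world!" sketch (illustrative; assumes PySide6 is installed).
import sys

from PySide6.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)      # every Qt GUI program needs one QApplication
label = QLabel("Hello, world!")   # a top-level widget displaying some text
label.show()                      # widgets are created hidden and must be shown
sys.exit(app.exec())              # run the Qt event loop until the window closes
```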
========================================
[SOURCE: https://en.wikipedia.org/wiki/White_Mosque_(Ramla)] | [TOKENS: 1533]
White Mosque of Ramle

The White Mosque (Arabic: المسجد الأبيض, romanized: al-Masjid al-Abyad; Hebrew: המסגד הלבן, romanized: HaMisgad HaLavan) was an Umayyad-era mosque, now in partial ruins, located in Ramle, in central Israel. Only its minaret is still standing. According to local Islamic tradition, the northwestern section of the mosque contained the shrine of an Islamic prophet, Salih. The minaret is also known as the Tower of the Forty Martyrs. Islamic tradition dating from 1467 CE claims that forty companions of the Islamic prophet Muhammad were buried at the mosque, which influenced an erroneous Western Christian tradition from the 16th century that the White Mosque was originally a church dedicated to the Forty Martyrs of Sebaste. In 2000, the mosque site was added to the UNESCO World Heritage Tentative List.

History

Construction on the White Mosque was initiated by the Umayyad governor (and future caliph) Sulayman ibn Abd al-Malik in 715–717 CE, and was completed by his successor Umar II in 720. The mosque was constructed of marble, while its courtyard was made of other local stone. Some two-and-a-half centuries later, Al-Maqdisi (c. 945/946–991) described it as follows: “The chief mosque of al-Ramla is in the market, and it is even more beautiful and graceful than that of Damascus (Umayyad Mosque). It is called al-Abyad the White Mosque. In all Islam there is found no finer mihrab (prayer niche) than the one here, and its pulpit is the most splendid to be seen after that of Jerusalem; also it possesses a beautiful minaret, built by the caliph Hisham ibn Abd al-Malik. I have heard my uncle relate that when this caliph was about to build the minaret, it was reported to him that the Christians possessed columns of marble, at this time lying buried beneath the sand, which they had prepared for the Church of Baliʾah (Abu Ghosh). Thereupon the caliph Hisham informed the Christians that either they must show him where these columns lay, or that he would demolish their church at Lydda (Church of Saint George), and employ its columns for the building of his mosque. So the Christians pointed out where they had buried their columns. They are very thick, and tall, and beautiful. The covered portion (or main building) of the mosque is flagged with marble, and the court with other stone, all carefully laid together. The gates of the main-building are made of cypress-wood and cedar, carved in the inner parts, and very beautiful in appearance.” An earthquake in January 1034 destroyed the mosque, "leaving it in a heap of ruins", along with a third of the city. In 1047, Nasir Khusraw reported that the mosque had been rebuilt. After this initial reconstruction, in 1190 Saladin ordered one of his outstanding architects, Ilyas Ibn ʿAbd Allah, to supervise what is considered the second construction phase of the mosque. Ilyas built the mosque's western side and the western enclosure wall, together with the central wudu building.[dubious – discuss] The third phase, in 1267–1268, began after the final fall of the Christian Kingdom of Jerusalem. On the orders of the Mamluk sultan al-Zahir Baibars, the mosque was rededicated and modified by adding the minaret, the dome, a new pulpit and prayer niche, a portico east of the minaret, and two halls outside the enclosure. The later Mamluk sultan al-Nasir Muhammad renovated the minaret after an earthquake in October 1318. The Mamluks again commissioned restoration works in 1408.
The last restoration of the White Mosque of Ramle took place between 1844 and 1918. Since then, the mosque has been mostly destroyed, except for its minaret.

Architecture

The White Mosque's compound is rectangular, 93 by 84 meters (305 by 276 ft), and oriented to the cardinal points. A large, open sahn is surrounded by built structures and walls. The 12-meter-wide (39 ft) prayer hall stands along the southern wall, with twelve openings northwards to the sahn. Its ceiling consists of cross-vaults supported by a central row of pillars. The ceiling and the western part of the prayer hall are 12th-century additions made by Saladin, who also had a new mihrab (prayer niche) built. Much of the mosque was built in white marble, with cypress and cedar wood used for the doors. Of its four facades, the eastern one is in disrepair.[dubious – discuss] The current Mamluk-built minaret, officially the Tower of the Forty Martyrs and also known as "The White Tower", stands on the northern side of the mosque compound. It is square in shape and five stories high, each story adorned with window niches, and has a balcony towards the top. The minaret was probably influenced by Crusader-era Christian architecture, but it was built by the Mamluks. At 27 meters (89 ft) tall, it is accessed via a staircase with 125 steps and contains small rooms, which could be used for resting or as study rooms. Al-Maqdisi mentioned a minaret in the 10th century. There is speculation that a minaret predating the Mamluk one may have been located closer to the centre of the mosque, as remnants of a square foundation have been found there. However, this may have been just a fountain.[dubious – discuss] Below the central courtyard of the mosque there are three large and well-preserved underground cisterns with barrel vaults carried by pillars. Two cisterns (the southern and western ones) were filled by an underground water duct probably connected to the aqueduct built simultaneously with the mosque and city, which brought spring water (probably from the vicinity of Gezer to the east). The third, eastern cistern was supplied by runoff rainwater. The reservoirs provided water for worshippers at the mosque and filled the pool for wudu at the center of the courtyard, of which only the foundation remains today.

Archaeological excavations

Excavations conducted by the State of Israel in 1949 on behalf of the Ministry of Religious Services and the Israel Department of Antiquities and Museums revealed that the mosque enclosure was built in the form of a quadrangle and included the mosque itself; two porticoes along the quadrangle's east and west walls; the north wall; the minaret; an unidentified building in the centre of the area; and three subterranean cisterns. The mosque was a broad-house, with a qibla facing Mecca. Two inscriptions were found that mention repairs to the mosque: the first relates that sultan Baibars built a dome over the minaret and added a door; the second states that in 1408, Seif ed-Din Baighut ez-Zahiri had the walls of the southern cistern coated with plaster.
========================================
[SOURCE: https://he.wikipedia.org/wiki/MusicBrainz] | [TOKENS: 467]
MusicBrainz

MusicBrainz is an open-source online music database. The site contains information about artists (singers, singer-songwriters, musical ensembles and musicians), their releases and songs, and the relationships between them. The information on albums includes the album title and the names and lengths of the tracks. The information on the site is maintained by volunteer editors. The information on releases also includes release dates, distribution locations and disc identification numbers. As of 9 July 2016, the site contained information on 1,094,897 artists, 1,636,741 albums and 20,509,451 songs.
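The database described above is also exposed through MusicBrainz's public web service. As a rough illustration (not part of the original article), the sketch below queries the artist index over HTTP using only the Python standard library; it assumes network access, that the version 2 JSON endpoint behaves as publicly documented, and that clients identify themselves with a descriptive User-Agent string, as the service's guidelines request.

```python
# Sketch: search the MusicBrainz artist index via its public web service (ws/2).
# Illustrative only; assumes network access and the documented JSON endpoint.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"query": "artist:Nirvana", "fmt": "json", "limit": 3})
url = "https://musicbrainz.org/ws/2/artist/?" + params

# MusicBrainz asks clients to send a meaningful User-Agent; this value is a placeholder.
request = urllib.request.Request(url, headers={"User-Agent": "example-app/0.1 (admin@example.com)"})

with urllib.request.urlopen(request, timeout=10) as response:
    data = json.load(response)

for artist in data.get("artists", []):
    print(artist.get("name"), artist.get("id"))   # artist name and MusicBrainz identifier (MBID)
```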
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-9] | [TOKENS: 9291]
Internet

The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF).

Terminology

The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases.
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to grow in a manner similar to the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York, has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
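The mojibake mentioned above arises when bytes produced with one character encoding are decoded with another. A small sketch (not from the article) using Python's standard codecs illustrates the effect:

```python
# Mojibake sketch: UTF-8 bytes decoded with the wrong encoding render as garbled text.
original = "日本語"                       # "Japanese language", written in Japanese
utf8_bytes = original.encode("utf-8")     # the bytes actually transmitted or stored

garbled = utf8_bytes.decode("cp1252")     # a receiver wrongly assuming Windows-1252
restored = utf8_bytes.decode("utf-8")     # a receiver using the correct encoding

print(garbled)    # æ—¥æœ¬èªž  (mojibake)
print(restored)   # 日本語
```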
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed]

Applications and services

The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for transferring information and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature.

Governance

The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
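The passage above notes that HTTP is the main access protocol of the World Wide Web. As a rough illustration (not part of the article), the sketch below issues a single HTTP GET request with Python's standard library; it assumes outbound network access, and the URL is only a placeholder.

```python
# Sketch: fetch a web resource over HTTP(S) using only the standard library.
# Illustrative; assumes network access. The URL is a placeholder example host.
import urllib.request

url = "https://example.com/"

with urllib.request.urlopen(url, timeout=10) as response:
    status = response.status                          # e.g. 200 for a successful request
    content_type = response.headers.get("Content-Type")
    body = response.read()                            # raw bytes of the resource

print(status, content_type, len(body), "bytes")
```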
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components.) This is a suite of protocols that are ordered into a set of four conceptional layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123:[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via Dynamic Host Configuration Protocol, or are configured.[citation needed] The Domain Name System converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
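The CIDR prefixes, subnet masks and DNS lookups described above can be explored directly with Python's standard library. A small sketch (not from the article; the host name and network values are only the examples used in the text):

```python
# Sketch: CIDR prefixes, netmasks and DNS resolution with the standard library.
import ipaddress
import socket

net4 = ipaddress.ip_network("198.51.100.0/24")
print(net4.netmask)               # 255.255.255.0, the dot-decimal subnet mask
print(net4.num_addresses)         # 256 addresses: 24-bit prefix, 8 host bits
print(net4[0], "to", net4[-1])    # 198.51.100.0 to 198.51.100.255

net6 = ipaddress.ip_network("2001:db8::/32")
print(net6.num_addresses == 2**96)    # True: 128-bit addresses, 32-bit routing prefix

# DNS: resolve a host name to the IP addresses it currently maps to
# (requires network access; the name is just an illustrative example).
for family, _, _, _, sockaddr in socket.getaddrinfo("en.wikipedia.org", 443):
    print(sockaddr[0])
```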
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet.

Security

Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.

[Figure: Global Internet Traffic Volume, in petabytes per month, 1990–2015.]

The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Maccabean_Revolt] | [TOKENS: 10097]
Contents Maccabean Revolt The Maccabean Revolt (Hebrew: מֶרֶד הַמַּכַּבִּים) or the Hasmonean revolt (מֶרֶד הַחַשְׁמוֹנָאִים) was a Jewish rebellion led by the Maccabees against the Seleucid Empire and against Hellenistic influence on Jewish life. The main phase of the revolt lasted from 167 to 160 BCE and ended with the Seleucids in control of Judea, but conflict between the Maccabees, Hellenized Jews, and the Seleucids continued until 134 BCE, with the Maccabees eventually attaining independence. Seleucid King Antiochus IV Epiphanes launched a massive campaign of repression against the Jewish religion in 168 BCE. The reason he did so is not entirely clear, but it seems to have been related to the King mistaking an internal conflict among the Jewish priesthood as a full-scale rebellion. Jewish practices were banned, Jerusalem was placed under direct Seleucid control, and the Second Temple in Jerusalem was made the site of a syncretic Pagan-Jewish cult. This repression triggered the revolt that Antiochus IV had feared, with a group of Jewish fighters led by Judas Maccabeus (Judah Maccabee) and his family rebelling in 167 BCE and seeking independence. The rebels as a whole would come to be known as the Maccabees, and their actions would be chronicled later in the books of 1 Maccabees and 2 Maccabees. The rebellion started as a guerrilla movement in the Judean countryside, raiding towns and terrorizing Greek officials far from direct Seleucid control, but it eventually developed a proper army capable of attacking the fortified Seleucid cities. In 164 BCE, the Maccabees captured Jerusalem, a significant early victory. The subsequent cleansing of the temple and rededication of the altar on 25 Kislev is the source of the festival of Hanukkah. The Seleucids eventually relented and unbanned Judaism, but the more radical Maccabees, not content with merely reestablishing Jewish practices under Seleucid rule, continued to fight, pushing for a more direct break with the Seleucids. Judas Maccabeus died in 160 BCE at the Battle of Elasa against the Greek general Bacchides, and the Seleucids reestablished direct control for a time, but remnants of the Maccabees under Judas's brother Jonathan Apphus continued to resist from the countryside. Eventually, internal division among the Seleucids and problems elsewhere in their empire would give the Maccabees their chance for proper independence. In 141 BCE, Simon Thassi succeeded in expelling the Greeks from their citadel in Jerusalem. An alliance with the Roman Republic helped guarantee their independence. Simon would go on to establish an independent Hasmonean state, which his line, the Hasmonean dynasty, governed. Background Beginning in 338 BCE, Alexander the Great began an invasion of the Persian Empire. In 333–332 BCE, Alexander's Macedonian forces conquered the Levant, including Palestine. At the time, Judea was home to many Jews who had returned from exile in Babylon thanks to the Persians. Alexander's empire was partitioned in 323 BCE after Alexander's death, and after the Wars of the Diadochi, the territory was taken by what would become Ptolemaic Egypt in 302–301 BCE. Another of the Greek successor states, the Seleucid Empire, would conquer Judea from Egypt during a series of campaigns from 235–198 BCE. During both Ptolemaic and Seleucid rule, many Jews learned Koine Greek, especially upper class Jews and Jewish minorities in towns further afield from Jerusalem and more attached to Greek trading networks. 
Greek philosophical ideas spread through the region as well. A Greek translation of the scriptures, the Septuagint, was also created during the third century BCE. Many Jews adopted dual names with both a Greek name and a Hebrew name, such as Jason and Joshua. Still, many Jews continued to speak the Aramaic language, the language that descended from what was spoken during the Babylonian exile. In general, the ruling Greek policy during this time period was to let Jews manage their own affairs and not interfere overtly with religious matters. Greek authors in the third century BCE who wrote about Judaism did so mostly positively. Cultural change did happen, but was largely driven by Jews themselves inspired by ideas from abroad; Greek rulers did not undertake explicit programs of forced Hellenization. Antiochus IV Epiphanes came to the throne of the Seleucids in 175 BCE, and did not change this policy. He appears to have done little to antagonize the region at first, and the Jews were largely content under his rule. One element that would come to later prominence was Antiochus IV replacing the high priest Onias III with his brother Jason after Jason offered a large sum of money to Antiochus. Jason also sought and received permission to make Jerusalem a self-governing polis, albeit with Jason able to control the citizenship lists of who would be able to vote and hold political office. These changes did not immediately appear to rouse any particular complaint from the majority of the citizenry in Jerusalem, and presumably he still kept the basic Jewish laws and tenets. Three years later, a newcomer named Menelaus offered an even larger bribe to Antiochus IV for the position of high priest. Jason, resentful, turned against Antiochus IV; additionally, a rumor spread that Menelaus had sold golden temple artifacts to help pay for the bribe, leading to unhappiness, especially among the city council Jason had established. This conflict was largely political rather than cultural; all sides, at this point, were "Hellenized", content with Seleucid rule, and primarily divided over Menelaus's alleged corruption and sacrilege. In 170–168 BCE, the Sixth Syrian War between the Seleucids and the Ptolemaic Egyptians arose. Antiochus IV led an army to attack Egypt. On his way back through Jerusalem after the successful campaign, High Priest Menelaus allegedly invited Antiochus inside the Second Temple (in violation of Jewish law), and he raided the temple treasury for 1800 talents.[note 1] Tensions with the Ptolemaic dynasty continued, and Antiochus rode out on campaign again in 168 BCE. Jason heard a rumor that Antiochus had perished, and launched an attempted coup against Menelaus in Jerusalem. Hearing of this, Antiochus, who was not dead, apparently interpreted this factional infighting as a revolt against his personal authority, and sent an army to crush Jason's plotters. From 168–167 BCE, the conflict spiraled out of control, and government policy radically shifted. Thousands in Jerusalem were killed and thousands more were enslaved; the city was attacked twice; new Greek governors were sent; the government seized land and property from Jason's supporters; and the Temple in Jerusalem was made the site of a syncretic Greek-Jewish religious group, polluting it in the eyes of the devout Jews. A new citadel garrisoned by Greeks and pro-Seleucid Jews, the Acra, was built in Jerusalem. 
Antiochus IV issued decrees officially suppressing the Jewish religion; subjects were required to eat pork and violate Jewish dietary law, work on the Jewish Sabbath, cease circumcising their sons, and so on.[note 2] The policy of tolerance of Jewish worship was at an end. The rebellion For Antiochus the unexpected conquest of the city (Jerusalem), the looting, and the wholesale slaughter were not enough. His psychopathic tendency was exacerbated by resentment at what the siege had cost him, and he tried to force the Jews to violate their traditional codes of practice by leaving their infant sons uncircumcised and sacrificing pigs on the altar. These orders were universally ignored, and Antiochus had the most prominent recusants butchered. — Flavius Josephus, The Jewish War, Book 1.34–35 In the aftermath of Antiochus IV issuing his decrees forbidding Jewish religious practice, a campaign of land confiscations paired with shrine and altar-building took place in the Judean countryside. A rural Jewish priest from Modein, Mattathias (Hebrew: Matityahu) of the Hasmonean family, sparked the revolt against the Seleucid Empire by refusing to worship the Greek gods at Modein's new altar. Mattathias killed a Jew who had stepped forward to take Mattathias' place in sacrificing to an idol as well as the Greek officer who was sent to enforce the sacrifice. He then destroyed the altar. Afterwards, he and his five sons fled to the nearby mountains, which sat directly next to Modein. After Mattathias' death about one year later in 166 BCE, his son Judas Maccabeus (Hebrew: Judah Maccabee) led a band of Jewish dissidents that would eventually absorb other groups opposed to Seleucid rule and grow into an army. While unable to directly strike Seleucid power at first, Judas's forces could maraud the countryside and attack Hellenized Jews, of whom there were many. The Maccabees destroyed Greek altars in the villages, forcibly circumcised boys, burnt villages, and drove Hellenized Jews off their land. Judas's nickname "Maccabee", now used to describe the Jewish partisans as a whole, is probably taken from the word "hammer" (Aramaic: maqqaba; Hebrew: makebet); the term "Maccabee" or "Maccabeus" would later be used as an honorific for Judas's brothers as well. Judas's campaign in the countryside became a full-scale revolt. Maccabean forces employed guerrilla tactics emphasizing speed and mobility. While less trained and under-equipped for pitched battles, the Maccabees could control which battles they took and retreat into the wilderness when threatened. They defeated two minor Seleucid forces at the Battle of the Ascent of Lebonah in 167 BCE and the Battle of Beth Horon in 166 BCE. Toward the end of summer in 165 BCE, Antiochus IV departed for Babylonia in the eastern half of his empire, and left Lysias in charge of the western half as regent. Shortly afterward, the Maccabees won a more substantial victory at the Battle of Emmaus. The factions attempted to negotiate a compromise, but failed; a large Seleucid army was sent to quash the revolt. After the Battle of Beth Zur in 164 BCE as well as news of the death of Antiochus IV in Persia, the Seleucid troops returned to Syria. The Maccabees entered Jerusalem in triumph. They ritually cleansed the Second Temple, reestablishing traditional Jewish worship there; 25 Kislev, the date of the cleansing in the Hebrew calendar, would later become the date when the festival of Hanukkah begins. 
Regent Lysias, preoccupied with internal Seleucid affairs, agreed to a political compromise that revoked Antiochus IV's ban on Jewish practices. This proved a wise decision: many Hellenized Jews had cautiously supported the revolt due to the suppression of their religion. With the ban retracted, their religious goals were accomplished, and the Hellenized Jews could more easily be potential Seleucid loyalists again. The Maccabees did not consider their goals complete, however, and continued their campaign for a starker break from Greek influence and full political independence. The rebels suffered a loss of support from moderates as a result. With the rebels now in control of most of Jerusalem and its environs, a second phase of the revolt began. The rebellion had additional resources, but also additional responsibilities. Rather than being able to retreat to the mountains, the rebels now had territory to defend; abandoning cities would leave their loyalists open to reprisals if the pro-Seleucid forces were allowed to take control again. As such, they focused on being able to win open battles, with additional trained heavy infantry. A civil struggle of low-level violence, reprisals, and murders arose in the countryside, especially in more distant areas where Jewish people were in the minority. Judas launched expeditions to these regions outlying Judea to fight non-Jewish Idumeans, Ammonites, and Galileans. He recruited devout Jews and sent them into Judea to concentrate his allies where they could be protected, although this influx of refugees would soon create food scarcity issues in the land the Maccabees held. In 162 BCE, Judas began a long siege of the fortified Acra citadel in Jerusalem, still controlled by Seleucid loyalist Jews and a Greek garrison. Regent Lysias, having dealt with rivals back in Antioch, returned to Judea with an army to aid the Seleucid forces. The Seleucids besieged Beth-Zur and took it without a fight, as it was a fallow year and food supplies were meager. They battled Judas's forces in an open fight at the Battle of Beth Zechariah next, with the Seleucids defeating the Maccabees. Judas's younger brother Eleazar Avaran died in battle after bravely attacking a war elephant and being crushed. Lysias's army next besieged Jerusalem. With supplies of food short on both sides and reports of a political rival returning from the eastern provinces to Antioch, Lysias decided to sign an agreement with the rebels and confirm the repeal of the anti-Jewish decrees; the rebels, in return, abandoned their siege of the Seleucid Acra. Lysias and his army then returned to Antioch, with the province officially at peace, but neither the Hellenized Jews nor the Maccabees laid down their arms. At some point from 163–162 BCE, Lysias ordered the execution of despised High Priest Menelaus as another gesture of reconciliation to the Jews. Shortly afterward, both regent Lysias and 11-year old king Antiochus V were executed after losing a succession struggle with Demetrius I Soter, who became the new Seleucid king. In the winter of late 162 BCE to early 161 BCE, Demetrius I appointed a new high priest, Alcimus, to replace Menelaus and sent an army led by general Bacchides to enforce Alcimus's station. Judas did not give battle, perhaps still rebuilding after his defeat at Beth Zechariah. Alcimus was accepted into Jerusalem, and proved more effective at rallying moderate Hellenists to the pro-Seleucid faction than Menelaus had been. 
Still, violent tensions between the Maccabees and the Hellenized Jews continued. Bacchides returned to Syria, and a new general, Nicanor, was appointed military governor of Judea. A truce was briefly made between Nicanor and the Maccabees, but was soon broken. Nicanor gained the hatred of the Maccabees after reports surfaced that he had blasphemed in the Temple and threatened to burn it. Nicanor took his forces into the field, and fought the Maccabees first at Caphar-salama, and then at the Battle of Adasa in late winter of 161 BCE. Nicanor was killed early in the fight, and the rest of his army fled afterward. Judas had been negotiating with the Roman Republic and extracted a vague agreement of potential support. While this would be cause for caution to the Seleucid Empire in the long term, it was not a particular concern in the short term, as the Romans would be unlikely to intervene if the Judean unrest could be decisively crushed. In 160 BCE, Seleucid King Demetrius I went on campaign in the east to fight the rebellious Timarchus. He left his general Bacchides to govern the western part of the empire. Bacchides led an army of 20,000 infantry and 2,000 cavalry into Judea on a second expedition intending to reconquer the restive province before it grew too used to autonomy. The size of the rebel army facing them is disputed; 1 Maccabees implausibly claims that Judas's army at Elasa was tiny, with 3,000 men of which only 800–1,000 would fight. Historians suspect the true numbers were larger and possibly as many as 22,000 soldiers, and the author downplayed their strength in an attempt to explain the defeat. The Seleucid army marched through Judea after carrying out a massacre in Galilee. This tactic would force Judas to respond in open battle, lest his reputation be damaged by inaction and Alcimus's faction gain strength by claiming he was better positioned to protect the people from future killings. Bacchides advanced toward Jerusalem, while Judas encamped on the rough terrain at Elasa to intercept the Seleucid army. Judas opted to attack the right flank of the Seleucid army hoping to kill the commander, similar to the victory over Nicanor at Adasa. The elite horsemen on the right retreated, and the rebels pursued. This may have been a tactic from Bacchides, however, to feign weakness and draw the Maccabees in where they could be surrounded and defeated, their own retreat cut off. Regardless of whether it was intentional or not, the Seleucids regained their formation and trapped the rebel army with their own left flank. Judas was eventually killed and the remaining Judeans fled. The Seleucids had reasserted their authority in Jerusalem. Bacchides fortified cities across the land, put allied Greek-friendly Jews in command in Jerusalem, and ensured children of leading families were held as hostages as a guarantee of good behavior. Judas's younger brother Jonathan Apphus (Hebrew: Yonatan) became the new leader of the Maccabees. A new tragedy struck the Hasmonean family when Jonathan's brother John Gaddi was seized and killed while on a mission in Nabatea. Jonathan fought Bacchides and his troops for a time, but the two eventually made a pact for a cease-fire. Bacchides then returned to Syria in 160 BCE. While the Maccabees had lost control of the cities, they seem to have built a rival government in the countryside from 160–153 BCE. 
The Maccabees avoided direct conflict with the Seleucids, but the internal Jewish civil struggle continued: the rebels harassed, exiled, and killed Jews seen as insufficiently anti-Greek. According to 1 Maccabees, "Thus the sword ceased from Israel. Jonathan settled in Michmash and began to judge the people; and he destroyed the godless out of Israel." The Maccabees were handed an opportunity as the Seleucids broke into infighting in a series of civil wars, the Seleucid Dynastic Wars. The Seleucid rival claimants to the throne needed all their troops elsewhere, and also wished to deny possible allies to other claimants, thus giving the Maccabees leverage. In 153–152 BCE, a deal was struck between Jonathan and Demetrius I. King Demetrius was fending off a challenge from Alexander Balas, and agreed to withdraw Seleucid forces from the fortified towns and garrisons in Judea, barring Beth-Zur and Jerusalem. The hostages were also released. Seleucid control over Judea was weakened, and then weakened further; Jonathan promptly betrayed Demetrius I after Alexander Balas offered an even better deal. Jonathan was granted the title of both High Priest and strategos by Alexander, essentially acknowledging that the Maccabee faction was a more relevant ally to would-be Seleucid leaders than the Hellenist faction. Jonathan's forces fought against Demetrius I, who would die in battle in 150 BCE. From 152–141 BCE, the rebels achieved a state of informal autonomy akin to a suzerain. The land was de jure part of the Seleucid Empire, but continuing civil wars gave the Maccabees considerable autonomy. Jonathan was given official authority to build and maintain an army in exchange for his aid. During this period, the legitimized armies of Jonathan fought in these civil wars and border struggles to maintain the favor of allied Seleucid leaders. The Seleucids did send an army back into Judea during this period, but Jonathan evaded it and refused battle until it eventually returned to the Seleucid heartland. In 143 BCE, regent Diodotus Tryphon, perhaps eager to reassert control over the restive province, invited Jonathan to a conference. The conference was a trap; Jonathan was captured and executed, despite Jonathan's brother Simon raising the requested ransom and sending hostages. This betrayal led to an alliance between the new leader of the Maccabees, Simon Thassi (Hebrew: Simeon), and Demetrius II Nicator, a rival of Diodotus Tryphon and claimant to the Seleucid throne. Demetrius II exempted Judea from payment of taxes in 142 BCE, essentially acknowledging its independence. The Seleucid settlement and garrison in Jerusalem, the Acra, finally came under Simon's control, peacefully, as did the remaining Seleucid garrison at Beth-Zur. Simon was appointed High Priest around 141 BCE, but he did so by acclamation from the Jewish people rather than appointment by the Seleucid king. Both Jonathan and now Simon had maintained diplomatic contact with the Roman Republic; official recognition by Rome came in 139 BCE, as the Romans were eager to weaken and divide the Greek states. This new Hasmonean-Roman alliance was also worded more firmly than Judas Maccabeus's hazy agreement 22–23 years earlier. Continuing strife between rival Seleucid rulers made a government response to formal independence of the new state difficult. 
New Seleucid King Antiochus VII Sidetes refused an offer of help from Simon's troops while pursuing their mutual enemy Diodotus Tryphon, and made demands for both tribute and for Simon to cede control of the border towns Joppa and Gazara. Antiochus VII sent an army to Judea at some point between 139 and 138 BCE under command of a general named Cendebeus, but it was repulsed. The Hasmonean leaders did not immediately call themselves "king" or establish a monarchy; Simon called himself merely "nasi" (in Hebrew, "Prince" or "President") and "ethnarch" (in Koine Greek, "Governor"). Aftermath In 135 BCE, Simon and two of his sons (Mattathias and Judas) were murdered by his son-in-law, Ptolemy son of Abubus, at a feast in Jericho. All five sons of Mattathias were now gone with Simon joining his brothers in death, leaving leadership to the next generation. Simon's third son, John Hyrcanus, became High Priest of Israel. King Antiochus VII would personally invade and besiege Jerusalem in 134 BCE, but after Hyrcanus paid a ransom and ceded the cities of Joppa and Gazara, the Seleucids left peacefully. The conflict ceased, and Hyrcanus and Antiochus VII joined themselves in an alliance, with Antiochus making a respectful donation of a sacrifice at the Temple. For the reprieve and donation, Antiochus VII was referred to as "Eusebes" ("Pious") by the grateful populace. With the suzerainty briefly re-established, Judea sent troops to aid Antiochus VII in his campaigns in Persia. After Antiochus VII's death in 129 BCE, the Hasmoneans ceased offering aid or tribute to the remnants of the declining Seleucid Empire. John Hyrcanus and his children would go on to centralize power more than Simon had done. Hyrcanus's son Aristobulus I called himself "basileus" (king), abandoning pretensions that the High Priest managing political matters was a temporary arrangement. The Hasmoneans exiled leaders on the council or gerusia that they felt might threaten their power. The council of elders – which some see as a precursor to the Sanhedrin – ceased to be an independent check on the monarchy. After the success of the Maccabean Revolt, leaders of the Hasmonean dynasty continued their conquest to surrounding areas of Judea, especially under Alexander Jannaeus. The Seleucid Empire was too riven with internal unrest to stop this, and Ptolemaic Egypt maintained largely friendly relations. The Hasmonean court at Jerusalem would not make a sharp break from Hellenic culture and language, and continued with a blend of Jewish traditions and Greek ones. They continued to be known by Greek names, would use both Hebrew and Greek on their coinage, and hired Greek mercenaries, but also restored Judaism to a place of primacy in Judea and fostered the new sense of Jewish nationalism that had sprouted during the revolt. The dynasty would last until 37 BCE, when Herod the Great, making use of heavy Roman support, defeated the last Hasmonean ruler to become a Roman client king. Tactics and technology Both sides were influenced by Hellenistic army composition and tactics. The basic Hellenistic battle deployment consisted of heavy infantry in the center, mounted cavalry on the flanks, and mobile skirmishers in the vanguard. The most common infantry weapon used was the sarissa, the Macedonian pike. The sarissa was a powerful weapon; it was held in two hands and had great reach (approximately ~6 meters), making it difficult for opponents to approach a phalanx of sarissa-wielding infantry safely. 
Hellenistic cavalry also used pikes, albeit slightly shorter ones. The Seleucids also had access to trained war elephants imported from India, which sported natural armor in their thick hides and could terrify opposing soldiers and their horses. Rarely, they also made use of scythed chariots. In terms of army size, the respected historian Polybius reports that in 165 BCE, a military parade near the Seleucid capital Antioch held by Antiochus IV consisted of 41,000 foot soldiers and 4,500 cavalrymen. These soldiers were preparing to fight in an expedition to the east, not in Judea, but the figures give a rough estimate of the total size of the Seleucid forces in the western part of their empire capable of being deployed wherever the ruler needed them, not including local auxiliaries and garrisons. Antiochus IV appears to have augmented the size of his army by hiring additional mercenaries, at cost to the Seleucid treasury. Most of the forces at that parade would be deployed on matters more important to the Seleucid leadership than suppressing the Judean rebellion, however, and as such only a portion of them likely participated in the battles of the rebellion. They may have been supplemented by local Seleucid-allied militias and garrisons. The Maccabees started as a guerrilla force that likely used the traditional weapons effective in small-unit combat in mountainous terrain: archers, slingers, and light infantry peltasts armed with sword and shield. Later writers would romantically portray the Maccabees as ordinary people fighting as irregulars, but the Maccabees did eventually train a standing army similar to that of the Seleucids, complete with Hellenic-style heavy infantry phalanxes, horse-mounted cavalry, and siege weaponry. However, while manufacturing the mostly wooden sarissa would have been easy for the rebels, their body armor was of lower quality. They likely used simple leather armor due to a paucity of metals and craftsmen capable of making Greek-style metal armor. It is speculated that diaspora Jews in countries hostile to the Seleucids, such as Ptolemaic Egypt and Pergamon, may have joined the cause as volunteers, bringing their own local talents to the rebel army. The rebel forces grew with time. There were 6,000 men in Judas's army near the start of the revolt, 10,000 men at the Battle of Beth Zur, and possibly as many as 22,000 soldiers by the time of the defeat at Elasa. In several battles, the rebels may have had numerical superiority to compensate for shortfalls in training and equipment.[note 3] After Jonathan was legitimized as high priest and governor by the Seleucid rulers, the Hasmoneans had easier access to recruitment; 20,000 soldiers are reported as repulsing Cendebeus in 139 BCE. Much of the combat in the revolt took place in hilly and mountainous terrain, which complicated warfare. Seleucid phalanxes trained for mountain combat would fight at somewhat greater distance from each other compared to packed lowland formations, and used slightly shorter but more maneuverable Roman-style pikes. Writings The most detailed contemporaneous writings that survived were the deuterocanonical books of First Maccabees and Second Maccabees, as well as Josephus's The Jewish War and Books XII and XIII of Jewish Antiquities. The authors were not disinterested parties; the authors of the books of Maccabees were favorable to the Maccabees, portraying the conflict as a divinely sanctioned holy war and elevating the stature of Judas and his brothers to heroic levels.
In comparison, Josephus did not want to offend Greek pagan readers of his work, and is ambivalent toward the Maccabees. The book of 1 Maccabees is considered mostly reliable, as it was seemingly written by an eyewitness early in the reign of the Hasmoneans, most likely during John Hyrcanus's reign. Its depictions of battles are detailed and seemingly accurate, although it portrays implausibly large numbers of Seleucid soldiers, to better emphasize God's aid and Judas's talents. The book also acts as Hasmonean dynasty propaganda in its editorial slant on events. The new rule of the Hasmoneans was not without its own internal enemies; the office of High Priest had been occupied for generations by a descendant of the High Priest Zadok. The Hasmoneans, while of the priestly line (Kohens), were seen by some as usurpers, did not descend from Zadok, and had taken the office originally only via a deal with a Seleucid king. As such, the book emphasizes that the Hasmoneans' actions were in line with heroes of older scripture; they were God's new chosen and righteous rulers. For example, it dismisses a defeat suffered by other commanders named Joseph and Azariah as because "they did not listen to Judas and his brothers. But they did not belong to the family of those men through whom deliverance was given to Israel." 2 Maccabees is an abridgment by an unknown Egyptian Jew of a lost five-volume work by an author named Jason of Cyrene. It is a separate work from 1 Maccabees and not a continuation of it. 2 Maccabees has a more directly religious focus than 1 Maccabees, crediting God and divine intervention for events more prominently than 1 Maccabees; it also focuses personally on Judas rather than other Hasmoneans. It has a special focus on the Second Temple: the controversies over the position of High Priest, its pollution by Menelaus into a Greek-Jewish mix, its eventual cleansing, and the threats by Nicanor at the Temple. 2 Maccabees also represents an attempt to take the cause of the Maccabees outside Judea, as it encourages Egyptian Jews and other diaspora Jews to celebrate the cleansing of the temple (Hanukkah) and revere Judas Maccabeus. In general, 2 Maccabees portrays the prospects of peace and cooperation more positively than 1 Maccabees. In 1 Maccabees, the only way for the Jews to honorably make a deal with the Seleucids involved first defeating them militarily and attaining functional independence. In 2 Maccabees, intended for an audience of Egyptian Jews who still lived under Greek rule, peaceful coexistence was possible, but misunderstandings or troublemakers forced the Jews into defensive action. Josephus wrote over two centuries after the revolt, but his friendship with the Flavian dynasty Roman emperors meant he had access to resources undreamt of by other scholars. Josephus appears to have used 1 Maccabees as one of his main sources for his histories, but supplements it with knowledge of events of the Seleucid Empire from Greek histories as well as unknown other sources. Josephus seems to be familiar with the work of historians Polybius and Strabo, as well as the mostly lost works of Nicolaus of Damascus. 
The Book of Daniel appears to have been written during the early stages of the revolt around 165 BCE, and would eventually be included in the Hebrew Bible and the Christian Old Testament.[note 4] While the setting of the book is 400 years earlier in Babylon, the book is a literary response to the situation in Judea during the revolt (Sitz im Leben); the writer chose to move the setting either for esoteric reasons or to evade scrutiny from would-be censors. It urges its readers to remain steadfast in the face of persecution. For example, Babylonian King Nebuchadnezzar orders his court to eat the king's rich food; the prophet Daniel and his companions keep kosher and eat a diet of vegetables and water, yet emerge healthier than all the king's courtiers. The message is clear: defy Antiochus's decree and keep Jewish dietary law. Daniel predicts the king will go insane; Antiochus's title, "Epiphanes" ("God Manifest"), was mocked by his enemies as "Epimanes" ("Madman"), and he was known to keep odd habits. When Daniel and the Jews are threatened with death, they face it calmly, and are saved in the end, a relevant message among Jewish opposition to Antiochus IV. The final chapters of the book of Daniel include apocalyptic visions of the future. One of the motives for the author was to give heart to devout Jews that their victory was foreseen by prophecy 400 years earlier. Daniel's final vision refers to Antiochus Epiphanes as the "king of the north" and describes his earlier actions, such as being repelled and humiliated by the Romans in his second campaign in Egypt, but also that the king of the north would "meet his end". Additionally, all those who had died under the king of the north would be revived, with those who suffered rewarded while those who had prospered would be subjected to shame and contempt. The main historical items taken from Daniel are its depictions of the king of the north desecrating the temple with an abomination of desolation and stopping the tamid, the daily sacrifice at the Temple; these agree with the depictions in 1 and 2 Maccabees of the changes at the Second Temple. Other works which appear to have at least been influenced by the Maccabean Revolt include the Book of Judith, the Testament of Moses, and parts of the Book of Enoch. The Book of Judith is a historical novel that describes Jewish resistance against an overwhelming military threat. While the parallels are not as stark as in Daniel, some of its depictions of oppression seem influenced by Antiochus's persecution, such as General Holofernes demolishing shrines, cutting down sacred groves, and attempting to destroy all worship other than of the king. Judith, the story's heroine, also bears the feminine form of the name "Judas". The Testament of Moses, similar to the Book of Daniel, provides a witness to Jewish attitudes leading up to the revolt: it describes persecution, denounces impious leaders and priests as collaborators, praises the virtues of martyrdom, and predicts God's retribution upon the oppressors. The Testament is usually considered to have been written in the first century CE, but it is at least possible it was written much earlier, in the Maccabean or Hasmonean era, and then appended to with first-century CE updates. Even if it was entirely written in the first century CE, it was still likely influenced by the experience of Antiochus IV's reign.
The Book of Enoch's early chapters were written around 300–200 BCE, but new sections were appended over time invoking the authority of Enoch, the great-grandfather of Noah. One section, the "Apocalypse of Weeks", is hypothesized to have been written around 167 BCE, just after Antiochus's persecution began. Similar to Daniel, after the Apocalypse of Weeks recounts world history up to the point of the persecution, it predicts that the righteous will eventually triumph, and encourages resistance. Another section of Enoch, the "Book of Dreams", was likely written after the Revolt had at least partially succeeded; it portrays the events of the revolt in the form of prophetic dream visions. A more uncertain work that has nevertheless attracted much interest is the Qumran Habakkuk Commentary, part of the Dead Sea Scrolls. The Qumran religious community was not on good terms with the Hasmonean religious establishment in Jerusalem, and is believed to have favored the Zadokite line of succession to the High Priesthood. The commentary (pesher) describes a situation wherein a "Righteous Teacher" is unfairly driven from their post and into exile by a "Wicked Priest" and a "Man of the Lie" (possibly the same person). Many figures have been proposed as the identity of the people behind these titles; one theory goes that the Righteous Teacher was whoever held the High Priest position after Alcimus's death in 159 BCE, perhaps a Zadokite. If this person even existed, they lost their position after Jonathan Apphus, backed by his Maccabee army and his new alliance with Seleucid royal claimant Alexander Balas, took over the High Priest position in 152 BCE. Thus, the Wicked Priest would be Jonathan, and the Qumran community of the era would have consisted of religious opposition to the Hasmonean takeover: the first Essenes. The date of the work is unknown, and others scholars have proposed different candidates as possible identities of the Wicked Priest, so the identification with Jonathan is only a possibility, yet an intriguing and plausible one. In the First and Second Books of the Maccabees, the Maccabean Revolt is described as a collective response to cultural oppression and national resistance to a foreign power. Written after the revolt was complete, the books urged unity among the Jews; they describe little of the Hellenizing faction other than to call them lawless and corrupt, and downplay their relevance and power in the conflict. While many scholars still accept this basic framework, that the Hellenists were weak and dependent on Seleucid aid to hold influence, this view has since been challenged. In the revisionist view, the heroes and villains were both Jews: a majority of the Jews cautiously supported Hellenizing High Priest Menelaus; Antiochus IV's edicts only came about due to pressure from Hellenist Jews; and the revolt was best understood as a civil war between traditionalist Jews in the countryside and Hellenized Jews in the cities, with only occasional Seleucid intervention. Elias Bickerman is generally credited as popularizing this alternative viewpoint in 1937, and other historians such as Martin Hengel have continued the argument. For example, Josephus's account directly blames Menelaus for convincing Antiochus IV to issue his anti-Jewish decrees. Alcimus, Menelaus's replacement as High Priest, is blamed for instigating a massacre of devout Jews in 1 Maccabees, rather than the Seleucids directly. 
The Maccabees themselves fight and exile Hellenists as well, most clearly in the final expulsion from the Acra, but also in the earlier countryside struggles against the Tobiad clan of Hellenist-friendly Jews. In general, scholarly opinion is that Hellenistic historians were biased, but also that the bias did not result in excessive distortion or fabrication of facts, and they are mostly reliable sources once the bias is removed. There exist revisionist scholars who are inclined to discount the reliability of the primary histories more aggressively, however. Daniel R. Schwartz argues that Antiochus IV's initial attacks on Jerusalem from 168–167 BCE were not out of pure malice, as 1 Maccabees depicts, or a misunderstanding as 2 Maccabees depicts (and most scholars accept), but rather suppressing an authentic rebellion whose members were lost to history, as the Hasmoneans wished to show only themselves as capable of bringing victory. Sylvie Honigman argues that the depictions of Seleucid religious oppression are misleading and likely false. She advances the view that the loss of civil rights by the Jews in 168 BCE was an administrative punishment in the aftermath of local unrest over increased taxes; that the struggle was fundamentally economic, and merely interpreted as religiously driven in retrospect. She also argues that the moralistic slant of the sources means that their depictions of impious acts by Hellenists cannot be trusted as historical. For example, the claim that Menelaus stole temple vessels to pay for a bribe to Antiochus is merely aimed at delegitimizing them both. John Ma argues that the Temple was restored in 164 BCE upon petition by Menelaus to Antiochus, not liberated and rededicated by the Maccabees. These views have attracted partial support, but have not become a new consensus themselves. Modern defenders of more direct readings of the sources cite that evidence of such an unrecorded popular rebellion is thin-to-nonexistent. Assuming that Antiochus IV would not have started an ethno-religious persecution for irrational reasons is an ahistorical position in this criticism, as many leaders both ancient and modern clearly were motivated by religious concerns. Later scholars and archaeologists have found and preserved various artifacts from the time period and analyzed them, which have informed historians on the plausibility of various elements in the books. For recent examples, a stele (the "Helidorus stele") was discovered and deciphered in 2007 that dated from around 178 BCE, and gives insight to Seleucid government appointments and policy in the era immediately preceding the revolt. The Givati Parking Lot dig in Jerusalem from 2007–2015 has found possible evidence of the Acra; it might resolve a seeming contradiction between Josephus's account of the Acra's fate (he claimed it was torn down) and 1 Maccabees's account (it was merely occupied) in favor of the 1 Maccabees version. Legacy The Jewish festival of Hanukkah celebrates the rededication of the Temple following Judas Maccabeus's victory over the Seleucids. According to rabbinic tradition, the victorious Maccabees could only find a small jug of oil that had remained pure and uncontaminated by virtue of a seal, and although it only contained enough oil to sustain the Menorah for one day, it miraculously lasted for eight days, by which time further oil had been procured. 
During the era of the Hasmonean kingdom, Hanukkah was observed prominently; it acted as a "Hasmonean Independence Day" to commemorate the success of the revolt and the legitimacy of the Hasmonean rulers. Diaspora Jews celebrated it as well, fostering a sense of Jewish collective identity: it was a liberation day for all Jews, not merely Judean Jews.[note 5] As a result, Hanukkah outlasted Hasmonean rule, although its importance receded as time passed. Hanukkah would gain new prominence in the 20th century and rekindle interest in its origins in the Maccabees. The Jewish victory at the Battle of Adasa led to an annual festival as well, albeit one less prominent and remembered than Hanukkah. The defeat of Seleucid general Nicanor is celebrated on 13 Adar as Yom Nicanor. The traumatic time period helped define the genre of the apocalypse and heightened Jewish apocalypticism. The portrayal of an evil tyrant like Antiochus IV attacking the holy city of Jerusalem in the Book of Daniel became a common theme during later Roman rule of Judea, and would contribute to Christian conceptions of the Antichrist. The persecution of the Jews under Antiochus, and the Maccabees response, would influence and create new trends in Jewish strains of thought with regard to divine rewards and punishments. In earlier Jewish works, devotion to God and adherence to the law led to rewards and punishments in life: the observant would prosper, and disobedience would result in disaster. The persecution of Antiochus IV directly contradicted this teaching: for the first time, Jews were suffering precisely because they refused to violate Jewish law, and thus the most devout and observant Jews were the ones suffering the most. This resulted in literature suggesting that those who suffered in their earthly life would be rewarded afterward, such as the Book of Daniel describing a future resurrection of the dead, or 2 Maccabees describing in detail the martyrdom of a woman and her seven sons under Antiochus, but who would be rewarded after their deaths. As a victory of the "few over the many", the revolt served as inspiration for future Jewish resistance movements, such as the Zealots. The most famous of these later revolts are the First Jewish–Roman War in 66–73 CE (also called the "Great Revolt") and the Bar Kochba revolt from 132 to 136 CE. After the failure of these revolts, Jewish interpretation of the Maccabean Revolt became more spiritual; it instead focused on stories of Hanukkah and God's miracle of the oil, rather than practical plans for an independent Jewish polity backed by armed might. The Maccabees were also discussed less as time went on; they appear only rarely in the mishnah, the writings of the Tannaim, after these Jewish defeats. Rabbinical displeasure with the later rule of the Hasmoneans after the revolt also contributed to this; even when stories were explicitly set during the Maccabean period, references to Judas by name were explicitly removed to avoid hero-worship of the Hasmonean line. The books of Maccabees were downplayed and relegated in the Jewish tradition and not included in the Jewish Tanakh (Hebrew Bible); it would be Christians who would produce more art and literature referencing the Maccabees during the medieval era, as the books of Maccabees were included in the Catholic and Orthodox Biblical canon. 
Medieval Christians during the Carolingian era esteemed the Maccabees as early examples of chivalry and knighthood, and the Maccabees were invoked in the later Middle Ages as holy warriors to emulate during the Crusades. In the 14th century, Judas Maccabeus was included in the Nine Worthies, medieval exemplars of chivalry for knights to model their conduct on. The Jewish downplaying of the Maccabees would be challenged centuries later in the 19th century and early 20th century, as Jewish writers and artists held up the Maccabees as examples of independence and victory. Proponents of Jewish nationalism of that era saw past events, such as the Maccabees, as a hopeful suggestion to what was possible, influencing the nascent Zionist movement. A British Zionist organization formed in 1896 is named the Order of Ancient Maccabeans, and the Jewish sporting organization Maccabi World Union names itself after them.[note 6] The revolt is featured in plays of the playwrights Aharon Ashman [he], Ya'akov Cahan, and Moshe Shamir. Various organizations in the modern state of Israel name themselves after the Maccabees and the Hasmoneans or otherwise honor them. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/DIGITAL_Command_Language] | [TOKENS: 1121]
Contents DIGITAL Command Language DIGITAL Command Language (DCL) is the standard command language for many of the operating systems created by Digital Equipment Corporation. DCL was originally implemented for IAS as the Program Development System (PDS), and later added to RSX-11M, RT-11 and RSTS/E, but took its most powerful form in VAX/VMS (later OpenVMS). DCL continues to be developed by VSI as part of OpenVMS. DCL is a scripting language supporting several data types, including strings, integers, bit arrays, arrays and Booleans, but not floating point numbers. Access to OpenVMS system services (kernel API) is through lexical functions, which perform the same as their compiled language counterparts and allow scripts to get information on system state. DCL includes IF-THEN-ELSE, access to all the Record Management Services (RMS) file types including stream, indexed, and sequential, but lacks a DO-WHILE or other looping construct, requiring users to make do with IF and GOTO-label statements instead. DCL is available for other operating systems as well. DCL is also the basis of the XLNT language, implemented on Windows by an interpreter-IDE-WSH engine combination with CGI capabilities, distributed by Advanced System Concepts Inc. from 1997. Command-line parser For the OpenVMS implementation, the command line parser is a runtime library (CLI$) that can be compiled into user applications and therefore gives a consistent command line interface for both OS-supplied commands and user-written commands. The command line must start with a verb and is then followed by up to 8 parameters (arguments) and/or qualifiers (switches in Unix terminology) which begin with a '/' character. Unlike Unix (but similar to DOS), a space is not required before the '/'. Qualifiers can be position independent (occurring anywhere on the command line) or position dependent, in which case the qualifier affects the parameter it appears after. Most qualifiers are position independent. Qualifiers may also be assigned values or a series of values. Only the first, most significant part of the verb and qualifier names is required. Parameters can be integers or alphanumeric text. An example OS command may look like: The second show command could also be typed as: While DCL documentation usually shows all DCL commands in uppercase, DCL commands are case-insensitive and may be typed in upper-, lower-, or mixed-case. Some implementations such as OpenVMS and RSX used a minimum uniqueness scheme in allowing commands to be shortened. Unlike other systems which use paths for locating commands, DCL requires commands to be defined explicitly, either via CLD (Command Language Definition) definitions or a foreign symbol. Most OpenVMS-native commands are defined via CLD files; these are compiled by the CDU, the Command Definition Utility, and added to a DCL 'table' (SYS$LIBRARY:DCLTABLES.EXE by default, although processes are free to use their own tables), and can then be invoked by the user. For example, defining a command FOO that accepts the option "/BAR" and is implemented by the image SYS$SYSEXE:FOO.EXE could be done with a CLD file similar to: The user can then type "FOO", or "FOO/BAR", and the FOO program will be invoked. The command definition language supports many types of options, for example dates and file specifications, and allows a qualifier to change the image invoked; for example "CREATE", to create a file, vs. "CREATE/DIRECTORY" to create a directory.
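The CLD example referred to above did not survive in this text. As an illustration only, a minimal sketch of what such a definition might look like is given below, based on the general CDU syntax; the FOO verb, the /BAR qualifier, and the image path are the hypothetical names from the paragraph above, and the exact keywords should be checked against the Command Definition Utility documentation for the OpenVMS version in use.

```
! Hypothetical CLD sketch: a FOO verb with a /BAR qualifier,
! implemented by the image SYS$SYSEXE:FOO.EXE.
DEFINE VERB FOO
    IMAGE "SYS$SYSEXE:FOO.EXE"
    QUALIFIER BAR
```

Such a file would typically be compiled into the process command table with the DCL command SET COMMAND (for example, SET COMMAND FOO.CLD), after which the verb can be used interactively.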
The other (simpler, but less flexible) method to define commands is via foreign commands. This is more akin to the Unix method of invoking programs. Once a foreign command symbol has been defined, the command 'FOO' will invoke FOO.EXE and supply any additional arguments literally to the program, for example, "foo -v". This method is generally used for programs ported from Unix and other non-native systems, for example C programs using argc and argv command syntax. Versions of OpenVMS DCL starting with V6.2 support the DCL$PATH logical name for establishing Unix-style command paths. This mechanism is known as an Automatic Foreign Command. DCL$PATH allows a list of directories to be specified, and these directories are then searched for DCL command procedures (command.COM) and then for executable images (command.EXE) with filenames that match the command that was input by the user. Like traditional foreign commands, automatic foreign commands also allow Unix-style command input. Scripting DCL scripts look much like any other scripting language, with some exceptions. All DCL verbs in a script are preceded by a $ symbol; other lines are considered to be input to the previous command. For example, to use the TYPE command to print a paragraph onto the screen, one might use a script similar to: Indirect variable referencing It is possible to build arrays in DCL that are referenced through translated symbols. This allows the programmer to build arbitrarily sized data structures using the data itself as an indexing function. In this example the variable rainbowblue is assigned the value "red", and rainbowgreen is assigned the value "yellow". Commands The following is a list of DCL commands for common computing tasks that are supported by the OpenVMS command-line interface. Lexical functions Lexical functions provide string functions and access to VMS-maintained data. Some Lexicals are: See also References Further reading External links
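The script examples referred to in the Scripting and Indirect variable referencing passages above (the TYPE script and the rainbowblue/rainbowgreen assignments) are not reproduced in this text. The following is a minimal, hypothetical sketch of both techniques, together with a foreign command definition of the kind discussed earlier; the specific file names and symbol values are illustrative and should be checked against the OpenVMS DCL documentation.

```
$ ! Hypothetical foreign command: the leading "$" in the symbol value
$ ! marks FOO as an image to run, so "FOO -v" passes "-v" to FOO.EXE.
$ FOO :== $SYS$SYSEXE:FOO.EXE
$ !
$ ! Printing a paragraph with TYPE: lines without a leading "$" are
$ ! treated as input to the previous command (here, read via SYS$INPUT).
$ TYPE SYS$INPUT:
This is a paragraph of text
that TYPE prints to the terminal.
$ !
$ ! Indirect variable referencing: apostrophe substitution expands the
$ ! symbol COLOR before the line is parsed, so these assignments create
$ ! the symbols RAINBOWBLUE and RAINBOWGREEN as described in the text.
$ COLOR = "blue"
$ RAINBOW'COLOR' = "red"
$ COLOR = "green"
$ RAINBOW'COLOR' = "yellow"
$ SHOW SYMBOL RAINBOWBLUE
$ SHOW SYMBOL RAINBOWGREEN
$ EXIT
```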
========================================
[SOURCE: https://en.wikipedia.org/wiki/Pi1_Orionis] | [TOKENS: 240]
Contents Pi1 Orionis Pi1 Orionis (π1 Ori, π1 Orionis) is a star in the equatorial constellation of Orion. It is faintly visible to the naked eye with an apparent visual magnitude of 4.74. Based upon an annual parallax shift of 28.04 mas, it is located about 116 light-years from the Sun. This is an A-type main-sequence star with a stellar classification of A3 Va. It is a Lambda Boötis star, which means the spectrum shows lower-than-expected abundances for heavier elements. Pi1 Orionis is a relatively young star, just 100 million years old, and is spinning fairly rapidly with a projected rotational velocity of 120 km/s. It has nearly double the mass of the Sun and 173% of the Sun's radius. The star radiates 16.9 times the solar luminosity from its outer atmosphere at an effective temperature of 8,900 K. An infrared excess indicates there is a debris disk with a temperature of 80 K orbiting 49 AU from the star. The dust has a combined mass 2.2% that of the Earth. References
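The quoted distance follows from the quoted parallax through the standard relation d (parsecs) = 1 / p (arcseconds). A short sketch of that arithmetic is given below; the conversion factor of about 3.26 light-years per parsec is assumed here and is not stated in the article.

```python
# Distance from annual parallax: d [pc] = 1 / p [arcsec]
parallax_arcsec = 28.04e-3            # 28.04 mas, as quoted in the text
distance_pc = 1.0 / parallax_arcsec   # ~35.7 parsecs
distance_ly = distance_pc * 3.2616    # ~116 light-years, matching the article
print(f"{distance_pc:.1f} pc is about {distance_ly:.0f} light-years")
```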
========================================
[SOURCE: https://en.wikipedia.org/wiki/Open_source] | [TOKENS: 6594]
Contents Open source Open source is software that is made freely available for possible modification and redistribution, including in the form of its source code. The licensing conditions include permission to use and view the source code, design documents, or content of the product. The open source model is a decentralized software development model that encourages open collaboration. A main principle of open source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open source movement in software began[a] as a response to the limitations of proprietary code. The model is used in projects such as open source eCommerce, open source appropriate technology, and open source drug discovery. Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms, such as free software, shareware, and public domain software. The term open source was introduced in 1998 and took hold with the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. Generally, open source refers to a computer program in which the source code is available to the general public for usage, modification from its original design, and publication of modified versions (forks) back to the community. Many large formal institutions have sprung up to support the development of the open-source movement, including the Apache Software Foundation, which supports community projects such as open-source frameworks and the open-source Apache HTTP Server. History The sharing of technical information predates the Internet and the personal computer considerably. For instance, in the early years of automobile development, a group of capital monopolists owned the rights to a 2-cycle gasoline-engine patent originally filed by George B. Selden. By controlling this patent, they were able to monopolize the industry and force car manufacturers to adhere to their demands, or risk a lawsuit. In 1911, independent automaker Henry Ford won a challenge to the Selden patent. The result was that the Selden patent became virtually worthless and a new association (which would eventually become the Motor Vehicle Manufacturers Association) was formed. The new association instituted a cross-licensing agreement among all US automotive manufacturers: although each company would develop technology and file patents, these patents were shared openly and with no exchange of money among all the firms. By the time the US entered World War II, 92 Ford patents and 515 patents from other companies were being shared among these manufacturers, with no exchange of money or lawsuits. Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software. Beginning in the 1960s, ARPANET researchers used an open "Request for Comments" (RFC) process to encourage feedback in early telecommunication network protocols. This led to the birth of the early Internet in 1969. The sharing of source code on the Internet began when the Internet was relatively primitive, with software distributed via UUCP, Usenet, IRC, and Gopher.
BSD, for example, was first widely distributed by posts to comp.os.linux on the Usenet, which is also where its development was discussed. Linux followed in this model. Open source as a term emerged in the late 1990s by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software" and sought to reframe the discourse to reflect a more commercially minded position. In addition, the ambiguity of the term "free software" was seen as discouraging business adoption. However, the ambiguity of the word "free" exists primarily in English as it can refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann and Eric S. Raymond. Peterson suggested "open source" at a meeting held at Palo Alto, California, in reaction to Netscape's announcement in January 1998 of a source code release for Navigator. Linus Torvalds gave his support the following day, and Phil Hughes backed the term in Linux Journal. Richard Stallman, the founder of the Free Software Foundation (FSF) in 1985, quickly decided against endorsing the term. The FSF's goal was to promote the development and use of free software, which they defined as software that grants users the freedom to run, study, share, and modify the code. This concept is similar to open source but places a greater emphasis on the ethical and political aspects of software freedom. Netscape released its source code under the Netscape Public License and later under the Mozilla Public License. Raymond was especially active in the effort to popularize the new term. He made the first public call to the free software community to adopt it in February 1998. Shortly after, he founded The Open Source Initiative in collaboration with Bruce Perens. The term gained further visibility through an event organized in April 1998 by technology publisher O'Reilly Media . Originally titled the "Freeware Summit" and later known as the "Open Source Summit", the event was attended by the leaders of many of the most important free and open-source projects, including Linus Torvalds, Larry Wall, Brian Behlendorf, Eric Allman, Guido van Rossum, Michael Tiemann, Paul Vixie, Jamie Zawinski, and Eric Raymond. At that meeting, alternatives to the term "free software" were discussed. Tiemann argued for "sourceware" as a new term, while Raymond argued for "open source." The assembled developers took a vote, and the winner was announced at a press conference the same evening. Economics Some economists agree that open-source is an information good or "knowledge good" with original work involving a significant amount of time, money, and effort. The cost of reproducing the work is low enough that additional users may be added at zero or near zero cost – this is referred to as the marginal cost of a product. Copyright creates a monopoly so that the price charged to consumers can be significantly higher than the marginal cost of production. This allows the author to recoup the cost of making the original work. Copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost. Access costs also pose problems for authors who wish to create a derivative work—such as a copy of a software program modified to fix a bug or add a feature, or a remix of a song—but are unable or unwilling to pay the copyright holder for the right to do so. 
Open source eliminates some of the access costs of consumers and creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works. Organizations such as Creative Commons host websites where individuals can file for alternative "licenses", or levels of restriction, for their works. These self-made protections free the general society of the costs of policing copyright infringement. Others argue that since consumers do not pay for their copies, creators are unable to recoup the initial cost of production and thus have little economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods which require large amounts of professional research and development, such as the pharmaceutical industry (which depends largely on patents, not copyright for intellectual property protection) are almost exclusively proprietary, although increasingly sophisticated technologies are being developed on open-source principles. There is evidence that open-source development creates enormous value. For example, in the context of open-source hardware design, digital designs are shared for free and anyone with access to digital manufacturing technologies (e.g. RepRap 3D printers) can replicate the product for the cost of materials. The original sharer may receive feedback and potentially improvements on the original design from the peer production community. Many open-source projects have a high economic value. According to the Battery Open Source Software Index (BOSS), the ten economically most important open-source projects are: The rank given is based on the activity regarding projects in online discussions, on GitHub, on search activity in search engines and on the influence on the labour market. Alternative arrangements have also been shown to result in good creation outside of the proprietary license model. Examples include:[citation needed] Open collaboration The open-source model is a decentralized software development model that encourages open collaboration, meaning "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open-source appropriate technology, and open-source drug discovery. The open-source model for software development inspired the use of the term to refer to other forms of open collaboration, such as in Internet forums, mailing lists and online communities. Open collaboration is also thought to be the operating principle underlining a gamut of diverse ventures, including TEDx and Wikipedia. Open collaboration is the principle underlying peer production, mass collaboration, and wikinomics. 
It was observed initially in open-source software, but can also be found in many other instances, such as in Internet forums, mailing lists, Internet communities, and many instances of open content, such as Creative Commons. It also explains some instances of crowdsourcing, collaborative consumption, and open innovation. Riehle et al. define open collaboration as collaboration based on three principles of egalitarianism, meritocracy, and self-organization. Levine and Prietula define open collaboration as "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." This definition captures multiple instances, all joined by similar principles. For example, all of the elements – goods of economic value, open access to contribute and consume, interaction and exchange, purposeful yet loosely coordinated work – are present in an open-source software project, in Wikipedia, or in a user forum or community. They can also be present in a commercial website that is based on user-generated content. In all of these instances of open collaboration, anyone can contribute and anyone can freely partake in the fruits of sharing, which are produced by interacting participants who are loosely coordinated. An annual conference dedicated to the research and practice of open collaboration is the International Symposium on Wikis and Open Collaboration (OpenSym, formerly WikiSym). As per its website, the group defines open collaboration as "collaboration that is egalitarian (everyone can join, no principled or artificial barriers to participation exist), meritocratic (decisions and status are merit-based rather than imposed) and self-organizing (processes adapt to people rather than people adapt to pre-defined processes)." Open-source license Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source gained hold in part due to the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified or shared (with or without modification) under defined terms and conditions. This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case. Licenses which only permit non-commercial redistribution or modification of the source code for personal use only are generally not considered as open-source licenses. However, open-source licenses may have some restrictions, particularly regarding the expression of respect to the origin of software, such as a requirement to preserve the name of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license). 
One popular set of open-source software licenses are those approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD). Applications Social and political views have been affected by the growth of the concept of open source. Advocates in one field often support the expansion of open source in other fields. But Eric Raymond and other founders of the open-source movement have sometimes publicly argued against speculation about applications outside software, saying that strong arguments for software openness should not be weakened by overreaching into areas where the story may be less compelling. The broader impact of the open-source movement, and the extent of its role in the development of new information sharing procedures, remain to be seen. The open-source movement has inspired increased transparency and liberty in biotechnology research, for example CAMBIA Even the research methodologies themselves can benefit from the application of open-source principles. It has also given rise to the rapidly-expanding open-source hardware movement. Open-source software is software which source code is published and made available to the public, enabling anyone to copy, modify and redistribute the source code without paying royalties or fees. LibreOffice and the GNU Image Manipulation Program are examples of open source software. As they do with proprietary software, users must accept the terms of a license when they use open source software—but the legal terms of open source licenses differ dramatically from those of proprietary licenses. Open-source code can evolve through community cooperation. These communities are composed of individual programmers as well as large companies. Some of the individual programmers who start an open-source project may end up establishing companies offering products or services incorporating open-source programs.[citation needed] Examples of open-source software products are: The Google Summer of Code, often abbreviated to GSoC, is an international annual program in which Google awards stipends to contributors who successfully complete a free and open-source software coding project during the summer. GSoC is a large scale project with 202 participating organizations in 2021. There are similar smaller scale projects such as the Talawa Project run by the Palisadoes Foundation (a non profit based in California, originally to promote the use of information technology in Jamaica, but now also supporting underprivileged communities in the US) Open-source hardware is hardware which initial specification, usually in a software format, is published and made available to the public, enabling anyone to copy, modify and redistribute the hardware and source code without paying royalties or fees. Open-source hardware evolves through community cooperation. These communities are composed of individual hardware/software developers, hobbyists, as well as very large companies. Examples of open-source hardware initiatives are: Some publishers of open-access journals have argued that data from food science and gastronomy studies should be freely available to aid reproducibility. A number of people have published creative commons licensed recipe books. An open-source robot is a robot whose blueprints, schematics, or source code are released under an open-source model. 
"Open" versus "free" versus "free and open" Free and open-source software (FOSS) or free/libre and open-source software (FLOSS) is openly shared source code that is licensed without any restrictions on usage, modification, or distribution.[citation needed] Confusion persists about this definition because the "free", also known as "libre", refers to the freedom of the product, not the price, expense, cost, or charge. For example, "being free to speak" is not the same as "free beer". Conversely, Richard Stallman argues the "obvious meaning" of term "open source" is that the source code is public/accessible for inspection, without necessarily any other rights granted, although the proponents of the term say the conditions in the Open Source Definition must be fulfilled. "Free and open" should not be confused with public ownership (state ownership), deprivatization (nationalization), anti-privatization (anti-corporate activism), or transparent behavior.[citation needed] Software Generally, open source refers to a computer program in which the source code is available to the general public for use for any (including commercial) purpose, or modification from its original design. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community. Hardware Agriculture, economy, manufacturing and production Science and medicine Media Organizations Procedures Society The rise of open-source culture in the 20th century resulted from a growing tension between creative practices that involve require access to content that is often copyrighted, and restrictive intellectual property laws and policies governing access to copyrighted content. The two main ways in which intellectual property laws became more restrictive in the 20th century were extensions to the term of copyright (particularly in the United States) and penalties, such as those articulated in the Digital Millennium Copyright Act (DMCA), placed on attempts to circumvent anti-piracy technologies. Although artistic appropriation is often permitted under fair-use doctrines, the complexity and ambiguity of these doctrines create an atmosphere of uncertainty among cultural practitioners. Also, the protective actions of copyright owners create what some call a "chilling effect" among cultural practitioners. The idea of an "open-source" culture runs parallel to "Free Culture", but is substantively different. Free culture is a term derived from the free software movement, and in contrast to that vision of culture, proponents of open-source culture (OSC) maintain that some intellectual property law needs to exist to protect cultural producers. Yet they propose a more nuanced position than corporations have traditionally sought. Instead of seeing intellectual property law as an expression of instrumental rules intended to uphold either natural rights or desirable outcomes, an argument for OSC takes into account diverse goods (as in "the Good life"[clarification needed]) and ends. Sites such as ccMixter offer up free web space for anyone willing to license their work under a Creative Commons license. The resulting cultural product is then available to download free (generally accessible) to anyone with an Internet connection. 
Older, analog technologies such as the telephone or television have limitations on the kind of interaction users can have. Through various technologies such as peer-to-peer networks and blogs, cultural producers can take advantage of vast social networks to distribute their products. As opposed to traditional media distribution, redistributing digital media on the Internet can be virtually costless. Technologies such as BitTorrent and Gnutella take advantage of various characteristics of the Internet protocol (TCP/IP) in an attempt to totally decentralize file distribution. Open-source ethics is split into two strands: Irish philosopher Richard Kearney has used the term "open-source Hinduism" to refer to the way historical figures such as Mohandas Gandhi and Swami Vivekananda worked upon this ancient tradition. Open-source journalism formerly referred to the standard journalistic techniques of news gathering and fact checking, reflecting open-source intelligence, a similar term used in military intelligence circles. Now, open-source journalism commonly refers to forms of innovative publishing of online journalism, rather than the sourcing of news stories by a professional journalist. In the 25 December 2006 issue of TIME magazine this is referred to as user created content and listed alongside more traditional open-source projects such as OpenSolaris and Linux. Weblogs, or blogs, are another significant platform for open-source culture. Blogs consist of periodic, reverse chronologically ordered posts, using a technology that makes webpages easily updatable with no understanding of design, code, or file transfer required. While corporations, political campaigns and other formal institutions have begun using these tools to distribute information, many blogs are used by individuals for personal expression, political organizing, and socializing. Some, such as LiveJournal or WordPress, use open-source software that is open to the public and can be modified by users to fit their own tastes. Whether the code is open or not, this format represents a nimble tool for people to borrow and re-present culture; whereas traditional websites made the illegal reproduction of culture difficult to regulate, the mutability of blogs makes "open sourcing" even more uncontrollable since it allows a larger portion of the population to replicate material more quickly in the public sphere. Messageboards are another platform for open-source culture. Messageboards (also known as discussion boards or forums), are places online where people with similar interests can congregate and post messages for the community to read and respond to. Messageboards sometimes have moderators who enforce community standards of etiquette such as banning spammers. Other common board features are private messages (where users can send messages to one another) as well as chat (a way to have a real time conversation online) and image uploading. Some messageboards use phpBB, which is a free open-source package. Where blogs are more about individual expression and tend to revolve around their authors, messageboards are about creating a conversation amongst its users where information can be shared freely and quickly. Messageboards are a way to remove intermediaries from everyday life—for instance, instead of relying on commercials and other forms of advertising, one can ask other users for frank reviews of a product, movie or CD. By removing the cultural middlemen, messageboards help speed the flow of information and exchange of ideas. 
OpenDocument is an open document file format for saving and exchanging editable office documents such as text documents (including memos, reports, and books), spreadsheets, charts, and presentations. Organizations and individuals that store their data in an open format such as OpenDocument avoid being locked into a single software vendor, leaving them free to switch software if their current vendor goes out of business, raises their prices, changes their software, or changes their licensing terms to something less favorable. Open-source movie production is either an open call system in which a changing crew and cast collaborate in movie production, a system in which the result is made available for re-use by others or in which exclusively open-source products are used in the production. The 2006 movie Elephants Dream is said to be the "world's first open movie", created entirely using open-source technology. An open-source documentary film has a production process allowing the open contributions of archival material footage, and other filmic elements, both in unedited and edited form, similar to crowdsourcing. By doing so, on-line contributors become part of the process of creating the film, helping to influence the editorial and visual material to be used in the documentary, as well as its thematic development. The first open-source documentary film is the non-profit WBCN and the American Revolution, which went into development in 2006, and will examine the role media played in the cultural, social and political changes from 1968 to 1974 through the story of radio station WBCN-FM in Boston. The film is being produced by Lichtenstein Creative Media and the non-profit Center for Independent Documentary. Open Source Cinema is a website to create Basement Tapes, a feature documentary about copyright in the digital age, co-produced by the National Film Board of Canada. Open-source film-making refers to a form of film-making that takes a method of idea formation from open-source software, but in this case the 'source' for a filmmaker is raw unedited footage rather than programming code. It can also refer to a method of film-making where the process of creation is 'open' i.e. a disparate group of contributors, at different times contribute to the final piece. Open-IPTV is IPTV that is not limited to one recording studio, production studio, or cast. Open-IPTV uses the Internet or other means to pool efforts and resources together to create an online community that all contributes to a show. Within the academic community, there is discussion about expanding what could be called the "intellectual commons" (analogous to the Creative Commons). Proponents of this view have hailed the Connexions Project at Rice University, OpenCourseWare project at MIT, Eugene Thacker's article on "open-source DNA", the "Open Source Cultural Database", Salman Khan's Khan Academy and Wikipedia as examples of applying open source outside the realm of computer software. Open-source curricula are instructional resources whose digital source can be freely used, distributed and modified. Another strand to the academic community is in the area of research. Many funded research projects produce software as part of their work. Due to the benefits of sharing software openly in scientific endeavours, there is an increasing interest in making the outputs of research projects available under an open-source license. In the UK the Joint Information Systems Committee (JISC) has developed a policy on open-source software. 
JISC also funds a development service called OSS Watch which acts as an advisory service for higher and further education institutions wishing to use, contribute to and develop open-source software. On 30 March 2010, President Barack Obama signed the Health Care and Education Reconciliation Act, which included $2 billion over four years to fund the TAACCCT program, which is described as "the largest OER (open education resources) initiative in the world and uniquely focused on creating curricula in partnership with industry for credentials in vocational industry sectors like manufacturing, health, energy, transportation, and IT". The principle of sharing pre-dates the open-source movement; for example, the free sharing of information has been institutionalized in the scientific enterprise since at least the 19th century. Open-source principles have always been part of the scientific community. The sociologist Robert K. Merton described the four basic elements of the community—universalism (an international perspective), communalism (sharing information), objectivity (removing one's personal views from the scientific inquiry) and organized skepticism (requirements of proof and review) that describe the (idealised) scientific community. These principles are, in part, complemented by US law's focus on protecting expression and method but not the ideas themselves. There is also a tradition of publishing research results to the scientific community instead of keeping all such knowledge proprietary. One of the recent initiatives in scientific publishing has been open access—the idea that research should be published in such a way that it is free and available to the public. There are currently many open access journals where the information is available free online, however most journals do charge a fee (either to users or libraries for access). The Budapest Open Access Initiative is an international effort with the goal of making all research articles available free on the Internet. The National Institutes of Health has recently proposed a policy on "Enhanced Public Access to NIH Research Information". This policy would provide a free, searchable resource of NIH-funded results to the public and with other international repositories six months after its initial publication. The NIH's move is an important one because there is significant amount of public funding in scientific research. Many of the questions have yet to be answered—the balancing of profit vs. public access, and ensuring that desirable standards and incentives do not diminish with a shift to open access. Benjamin Franklin was an early contributor eventually donating all his inventions including the Franklin stove, bifocals, and the lightning rod to the public domain. New NGO communities are starting to use open-source technology as a tool. One example is the Open Source Youth Network started in 2007 in Lisboa by ISCA members. Open innovation is also a new emerging concept which advocates putting R&D in a common pool. The Eclipse platform is openly presenting itself as an open innovation network. Copyright protection is used in the performing arts and even in athletic activities. Some groups have attempted to remove copyright from such practices. In 2012, Russian music composer, scientist and Russian Pirate Party member Victor Argonov presented detailed raw files of his electronic opera "2032" under free license CC BY-NC 3.0 (later relicensed under CC BY-SA 4.0). 
This opera was originally composed and published in 2007 by Russian label MC Entertainment as a commercial product, but then the author changed its status to free. In his blog he said that he decided to open raw files (including wav, midi and other used formats) to the public to support worldwide pirate actions against SOPA and PIPA. Several Internet resources called "2032" the first open-source musical opera in history. Notable events and applications that have been developed via the open source community, and echo the ideologies of the open source movement, include the Open Education Consortium, Project Gutenberg, Synthethic Biology, and Wikipedia. The Open Education Consortium is an organization composed of various colleges that support open source and share some of their material online. This organization, headed by Massachusetts Institute of Technology, was established to aid in the exchange of open source educational materials. Wikipedia is a user-generated online encyclopedia with sister projects in academic areas, such as Wikiversity—a community dedicated to the creation and exchange of learning materials.[failed verification] Prior to the existence of Google Scholar Beta, Project Gutenberg was the first supplier of electronic books and the first free library project.[failed verification] The open-access movement is a movement that is similar in ideology to the open source movement. Members of this movement maintain that academic material should be readily available to provide help with "future research, assist in teaching and aid in academic purposes." The open-access movement aims to eliminate subscription fees and licensing restrictions of academic materials. The free-culture movement is a movement that seeks to achieve a culture that engages in collective freedom via freedom of expression, free public access to knowledge and information, full demonstration of creativity and innovation in various arenas, and promotion of citizen liberties.[citation needed] Creative Commons is an organization that "develops, supports, and stewards legal and technical infrastructure that maximizes digital creativity, sharing, and innovation." It encourages the use of protected properties online for research, education, and creative purposes in pursuit of a universal access. Creative Commons provides an infrastructure through a set of copyright licenses and tools that creates a better balance within the realm of "all rights reserved" properties. The Creative Commons license offers a slightly more lenient alternative to "all rights reserved" copyrights for those who do not wish to exclude the use of their material. The Zeitgeist Movement (TZM) is an international social movement that advocates a transition into a sustainable "resource-based economy" based on collaboration in which monetary incentives are replaced by commons-based ones with everyone having access to everything (from code to products) as in "open source everything". While its activism and events are typically focused on media and education, TZM is a major supporter of open source projects worldwide since they allow for uninhibited advancement of science and technology, independent of constraints posed by institutions of patenting and capitalist investment. P2P Foundation is an "international organization focused on studying, researching, documenting and promoting peer to peer practices in a very broad sense." Its objectives incorporate those of the open source movement, whose principles are integrated in a larger socio-economic model. 
Open-weight refers to the release of an artificial intelligence model's trained parameters, or weights, for public use. Unlike fully open-source models, open-weight releases may not include the underlying source code, training data, or full documentation. The availability of weights allows researchers and developers to run, evaluate, or fine-tune the model, though license terms may restrict redistribution or commercial use. The term is commonly used in reference to large language models such as LLaMA and Mistral, which have released model weights under research or custom licenses. See also Notes References QUOTE: The terms “free software” and “open source” stand for almost the same range of programs. However, they say deeply different things about those programs, based on different values. The free software movement campaigns for freedom for the users of computing; it is a movement for freedom and justice. By contrast, the open source idea values mainly practical advantage and does not campaign for principles. This is why we do not agree with open source, and do not use that term. Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-241] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Originally created by Markus "Notch" Persson using the Java programming language, Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPC) through trading emeralds for different goods and vice versa. 
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough, which takes about nine minutes to scroll past, is the game's only narrative text, and the only text of significant length directed at the player.: 10–12 At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or continuously on peaceful. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing them to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience it as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, hosting one themselves or connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, and support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was arbitrated on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received usually annual major updates—free to players who have purchased the game— each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need of RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development began for the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—in May 2009,[k] and ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this apparent acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. Another version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed over a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were more broad than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue to what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start a hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed, and losing mobs would now have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch by foot on an older version of the game. YouTube announced that on 14 December 2021 that the total amount of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million in the box office in the first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a twice-yearly event as part of Minecraft's changing update schedule. Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Logarithm] | [TOKENS: 13022]
Contents Logarithm In mathematics, the logarithm of a number is the exponent by which another fixed value, the base, must be raised to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 1000 is 10 to the 3rd power: $1000 = 10^3 = 10 \times 10 \times 10$. More generally, if $x = b^y$, then $y$ is the logarithm of $x$ to base $b$, written $\log_b x$, so $\log_{10} 1000 = 3$. As a single-variable function, the logarithm to base b is the inverse of exponentiation with base b. The logarithm base 10 is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e ≈ 2.718 as its base; its use is widespread in mathematics and physics because of its very simple derivative. The binary logarithm uses base 2 and is widely used in computer science, information theory, music theory, and photography. When the base is unambiguous from the context or irrelevant it is often omitted, and the logarithm is written log x. Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors: $\log_b(xy) = \log_b x + \log_b y$, provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter e as the base of natural logarithms. Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a unit used to express ratio as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting. The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography. Motivation Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. The inverse of addition is subtraction, and the inverse of multiplication is division. Similarly, a logarithm is the inverse operation of exponentiation. Exponentiation is when a number b, the base, is raised to a certain power y, the exponent, to give a value x; this is denoted $b^y = x$. For example, raising 2 to the power of 3 gives 8: $2^3 = 8$.
The logarithm of base b is the inverse operation, that provides the output y from the input x. That is, $y = \log_b x$ is equivalent to $x = b^y$ if b is a positive real number. (If b is not a positive real number, both exponentiation and logarithm can be defined but may take several values, which makes definitions much more complicated.) One of the main historical motivations of introducing logarithms is the formula $\log_b(xy) = \log_b x + \log_b y$, by which tables of logarithms allow multiplication and division to be reduced to addition and subtraction, a great aid to calculations before the invention of computers. Definition Given a positive real number b such that b ≠ 1, the logarithm of a positive real number x with respect to base b[nb 1] is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the unique real number y such that $b^y = x$. The logarithm is denoted "logb x" (pronounced as "the logarithm of x to base b", "the base-b logarithm of x", or most commonly "the log, base b, of x"). An equivalent and more succinct definition is that the function $\log_b$ is the inverse function to the function $x \mapsto b^x$. Logarithmic identities Several important formulas, sometimes called logarithmic identities or logarithmic laws, relate logarithms to one another. The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions $x = b^{\log_b x}$ or $y = b^{\log_b y}$ in the left hand sides. In the following formulas, $x$ and $y$ are positive real numbers and $p$ is an integer greater than 1. The logarithm logb x can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:[nb 2] $\log_b x = \frac{\log_k x}{\log_k b}$. Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula: $\log_b x = \frac{\log_{10} x}{\log_{10} b} = \frac{\log_e x}{\log_e b}$. Given a number x and its logarithm y = logb x to an unknown base b, the base is given by: $b = x^{1/y}$, which can be seen from taking the defining equation $x = b^{\log_b x} = b^y$ to the power of $\tfrac{1}{y}$.
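To make the change-of-base rule and the base-recovery identity above concrete, here is a minimal Python sketch; the particular numbers (base 7 and argument 2401) are arbitrary illustration choices, and the standard-library math module supplies the reference logarithms.

```python
import math

# Change of base: log_b(x) = log_k(x) / log_k(b), here with natural logs as base k.
b, x = 7, 2401            # 7**4 == 2401, so log_7(2401) should be 4
log_b_x = math.log(x) / math.log(b)
print(log_b_x)            # ~4.0
print(math.log(x, b))     # math.log also accepts the base directly and agrees

# Recovering an unknown base from x and y = log_b(x): b = x**(1/y)
y = log_b_x
print(x ** (1 / y))       # ~7.0
```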
Particular bases Among all choices for the base, three are particularly common. These are b = 10, b = e (the irrational mathematical constant e ≈ 2.71828183), and b = 2 (the binary logarithm). In mathematical analysis, the logarithm base e is widespread because of analytical properties explained below. On the other hand, base 10 logarithms (the common logarithm) are easy to use for manual calculations in the decimal number system: $\log_{10}(10x) = \log_{10} 10 + \log_{10} x = 1 + \log_{10} x$. Thus, log10 (x) is related to the number of decimal digits of a positive integer x: the number of digits is the smallest integer strictly bigger than log10 (x). For example, log10(5986) is approximately 3.78. The next integer above it is 4, which is the number of digits of 5986. Both the natural logarithm and the binary logarithm are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively. Binary logarithms are also used in computer science, where the binary system is ubiquitous; in music theory, where a pitch ratio of two (the octave) is ubiquitous and the number of cents between any two pitches is a scaled version of the binary logarithm, or $\log_2$ times 1200, of the pitch ratio (that is, 100 cents per semitone in conventional equal temperament), or equivalently the log base $2^{1/1200}$; and in photography, where rescaled base 2 logarithms are used to measure exposure values, light levels, exposure times, lens apertures, and film speeds in "stops". The abbreviation log x is often used when the intended base can be inferred based on the context or discipline, or when the base is indeterminate or immaterial. Common logarithms (base 10), historically used in logarithm tables and slide rules, are a basic tool for measurement and computation in many areas of science and engineering; in these contexts log x still often means the base ten logarithm. In mathematics log x usually refers to the natural logarithm (base e). In computer science and information theory, log often refers to binary logarithms (base 2). The following table lists common notations for logarithms to these bases. The "ISO notation" column lists designations suggested by the International Organization for Standardization.
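The digit-count relationship and the cents scaling described above can be checked directly; the following Python snippet is only an illustration (the sample integers and the 3:2 pitch ratio are arbitrary choices), and it notes the usual floating-point caveat.

```python
import math

def decimal_digits(n: int) -> int:
    """Digit count of a positive integer: the smallest integer strictly above log10(n)."""
    # For positive integers this equals floor(log10(n)) + 1; for extremely large n,
    # floating-point rounding can misfire, in which case len(str(n)) is the safe fallback.
    return math.floor(math.log10(n)) + 1

print(math.log10(5986), decimal_digits(5986))   # ~3.777 and 4
print(decimal_digits(1000))                     # 4 (log10 is exactly 3; the next integer is 4)

# Binary logarithms in music theory: cents between two pitches = 1200 * log2(ratio).
print(1200 * math.log2(3 / 2))                  # a just perfect fifth, ~701.96 cents
```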
History The history of logarithms in seventeenth-century Europe saw the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms). Prior to Napier's invention, there had been other techniques of similar scopes, such as the prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600. Napier coined the term for logarithm in Middle Latin, logarithmus, literally meaning 'ratio-number', derived from the Greek logos 'proportion, ratio, word' + arithmos 'number'. The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities. Such methods are called prosthaphaeresis. Invention of the function now known as the natural logarithm began as an attempt to perform a quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague. Archimedes had written The Quadrature of the Parabola in the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values, prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural logarithm. Soon the new function was appreciated by Christiaan Huygens, and James Gregory. The notation Log y was adopted by Gottfried Wilhelm Leibniz in 1675, and the next year he connected it to the integral $\int \frac{dy}{y}$. Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that $\log(\cos\theta + i\sin\theta) = i\theta$. Logarithm tables, slide rules, and historical applications By simplifying difficult calculations before calculators and computers became available, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms ... [a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations. As the function $f(x) = b^x$ is the inverse function of logb x, it has been called an antilogarithm. Nowadays, this function is more commonly called an exponential function. A key tool that enabled the practical use of logarithms was the table of logarithms. The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range from 1 to 1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These tables listed the values of log10 x for any number x in a certain range, at a certain precision. Base-10 logarithms were universally used for computation, hence the name common logarithm, since numbers that differ by factors of 10 have logarithms that differ by integers. The common logarithm of x can be separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of logarithms need only include the mantissa, as the characteristic can be easily determined by counting digits from the decimal point. The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by $\log_{10} 3542 = \log_{10}(1000 \cdot 3.542) = 3 + \log_{10} 3.542 \approx 3 + \log_{10} 3.54$. Greater accuracy can be obtained by interpolation: $\log_{10} 3542 \approx 3 + \log_{10} 3.54 + 0.2\,(\log_{10} 3.55 - \log_{10} 3.54)$. The value of $10^x$ can be determined by reverse look up in the same table, since the logarithm is a monotonic function. The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms.
The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, via the same table: $cd = 10^{\log_{10} c}\,10^{\log_{10} d} = 10^{\log_{10} c + \log_{10} d}$ and $\frac{c}{d} = cd^{-1} = 10^{\log_{10} c - \log_{10} d}$. For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities. Calculations of powers and roots are reduced to multiplications or divisions and lookups by $c^d = \left(10^{\log_{10} c}\right)^d = 10^{d\log_{10} c}$ and $\sqrt[d]{c} = c^{1/d} = 10^{\frac{1}{d}\log_{10} c}$. Trigonometric calculations were facilitated by tables that contained the common logarithms of trigonometric functions.
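As a rough illustration of the table-based workflow just described, the Python sketch below stands in for a printed four-figure table by rounding log10 to four decimal places, then multiplies and takes powers and roots purely by adding or scaling those values and "antilogging" the result; the sample numbers are arbitrary.

```python
import math

def table_log10(x: float) -> float:
    # Stand-in for a printed four-figure table: characteristic plus a mantissa
    # rounded to four decimal places.
    return round(math.log10(x), 4)

c, d = 3542, 2.718                                   # arbitrary example values
product = 10 ** (table_log10(c) + table_log10(d))    # add the logs, then take the antilog
print(product, c * d)                                # close to the exact product, within table precision

# Powers and roots become a single multiplication or division of a logarithm.
print(10 ** (3 * table_log10(7)), 7 ** 3)            # cube of 7
print(10 ** (table_log10(7) / 3), 7 ** (1 / 3))      # cube root of 7
```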
Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms, as illustrated here: For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables. Analytic properties A deeper study of logarithms requires the concept of a function. A function is a rule that, given one number, produces another number. An example is the function producing the x-th power of b from any real number x, where the base b is a fixed number. This function is written as $f(x) = b^x$. When b is positive and unequal to 1, we show below that f is invertible when considered as a function from the reals to the positive reals. Let b be a positive real number not equal to 1 and let $f(x) = b^x$. It is a standard result in real analysis that any continuous strictly monotonic function is bijective between its domain and range. This fact follows from the intermediate value theorem. Now, f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1), is continuous, has domain $\mathbb{R}$, and has range $\mathbb{R}_{>0}$. Therefore, f is a bijection from $\mathbb{R}$ to $\mathbb{R}_{>0}$. In other words, for each positive real number y, there is exactly one real number x such that $b^x = y$. We let $\log_b \colon \mathbb{R}_{>0} \to \mathbb{R}$ denote the inverse of f. That is, logb y is the unique real number x such that $b^x = y$. This function is called the base-b logarithm function or logarithmic function (or just logarithm). The function logb x can also be essentially characterized by the product formula $\log_b(xy) = \log_b x + \log_b y$. More precisely, the logarithm to any base b > 1 is the only increasing function f from the positive reals to the reals satisfying f(b) = 1 and $f(xy) = f(x) + f(y)$. As discussed above, the function logb is the inverse to the exponential function $x \mapsto b^x$. Therefore, their graphs correspond to each other upon exchanging the x- and the y-coordinates (or upon reflection at the diagonal line x = y), as shown at the right: a point $(t, u = b^t)$ on the graph of f yields a point $(u, t = \log_b u)$ on the graph of the logarithm and vice versa. As a consequence, logb (x) diverges to infinity (gets bigger than any given number) if x grows to infinity, provided that b is greater than one. In that case, logb(x) is an increasing function. For b < 1, logb (x) tends to minus infinity instead. When x approaches zero, logb x goes to minus infinity for b > 1 (plus infinity for b < 1, respectively). Analytic properties of functions pass to their inverses. Thus, as $f(x) = b^x$ is a continuous and differentiable function, so is logb y. Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of f(x) evaluates to $\ln(b)\,b^x$ by the properties of the exponential function, the chain rule implies that the derivative of logb x is given by $\frac{d}{dx}\log_b x = \frac{1}{x\ln b}$. That is, the slope of the tangent touching the graph of the base-b logarithm at the point (x, logb (x)) equals 1/(x ln(b)). The derivative of ln(x) is 1/x; this implies that ln(x) is the unique antiderivative of 1/x that has the value 0 for x = 1. It is this very simple formula that motivated to qualify as "natural" the natural logarithm; this is also one of the main reasons of the importance of the constant e. The derivative with a generalized functional argument f(x) is $\frac{d}{dx}\ln f(x) = \frac{f'(x)}{f(x)}$. The quotient at the right hand side is called the logarithmic derivative of f. Computing f'(x) by means of the derivative of ln(f(x)) is known as logarithmic differentiation. The antiderivative of the natural logarithm ln(x) is: $\int \ln(x)\,dx = x\ln(x) - x + C$. Related formulas, such as antiderivatives of logarithms to other bases can be derived from this equation using the change of bases. The natural logarithm of t can be defined as the definite integral: $\ln t = \int_1^t \frac{1}{x}\,dx$. This definition has the advantage that it does not rely on the exponential function or any trigonometric functions; the definition is in terms of an integral of a simple reciprocal. As an integral, ln(t) equals the area between the x-axis and the graph of the function 1/x, ranging from x = 1 to x = t. This is a consequence of the fundamental theorem of calculus and the fact that the derivative of ln(x) is 1/x.
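Both the integral definition of the natural logarithm and the derivative formula for logb can be checked numerically; the midpoint rule, step count, and test values below are arbitrary choices for a quick sanity check rather than a serious quadrature routine.

```python
import math

def ln_by_quadrature(t: float, steps: int = 100_000) -> float:
    """Approximate ln(t) = integral of 1/x from 1 to t with the midpoint rule."""
    h = (t - 1) / steps
    return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

print(ln_by_quadrature(5.0), math.log(5.0))        # both close to 1.6094

# d/dx log_b(x) = 1 / (x ln b), checked with a central finite difference.
b, x, eps = 10, 3.0, 1e-6
numeric = (math.log(x + eps, b) - math.log(x - eps, b)) / (2 * eps)
print(numeric, 1 / (x * math.log(b)))              # both close to 0.1448
```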
Product and power logarithm formulas can be derived from this definition. For example, the product formula ln(tu) = ln(t) + ln(u) is deduced as: $\ln(tu) = \int_1^{tu} \frac{1}{x}\,dx \overset{(1)}{=} \int_1^t \frac{1}{x}\,dx + \int_t^{tu} \frac{1}{x}\,dx \overset{(2)}{=} \ln(t) + \int_1^u \frac{1}{w}\,dw = \ln(t) + \ln(u)$. The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w = x/t). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factor t and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function f(x) = 1/x again. Therefore, the left hand blue area, which is the integral of f(x) from t to tu is the same as the integral from 1 to u. This justifies the equality (2) with a more geometric proof. The power formula $\ln(t^r) = r\ln(t)$ may be derived in a similar way: $\ln(t^r) = \int_1^{t^r} \frac{1}{x}\,dx = \int_1^t \frac{1}{w^r}\left(r w^{r-1}\,dw\right) = r\int_1^t \frac{1}{w}\,dw = r\ln(t)$. The second equality uses a change of variables (integration by substitution), $w = x^{1/r}$. The sum over the reciprocals of natural numbers, $1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^{n} \frac{1}{k}$, is called the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference, $\sum_{k=1}^{n} \frac{1}{k} - \ln(n)$, converges (i.e. gets arbitrarily close) to a number known as the Euler–Mascheroni constant γ = 0.5772.... This relation aids in analyzing the performance of algorithms such as quicksort.
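The convergence of the harmonic numbers minus ln(n) toward the Euler–Mascheroni constant is easy to watch numerically; the sample values of n below are arbitrary choices.

```python
import math

def harmonic(n: int) -> float:
    """n-th harmonic number 1 + 1/2 + ... + 1/n."""
    return sum(1 / k for k in range(1, n + 1))

# The differences drift toward the Euler–Mascheroni constant γ ≈ 0.57722.
for n in (10, 1_000, 100_000):
    print(n, harmonic(n) - math.log(n))
```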
For any real number z that satisfies 0 < z ≤ 2, the following formula holds:[nb 4]

{\displaystyle {\begin{aligned}\ln(z)&={\frac {(z-1)^{1}}{1}}-{\frac {(z-1)^{2}}{2}}+{\frac {(z-1)^{3}}{3}}-{\frac {(z-1)^{4}}{4}}+\cdots \\&=\sum _{k=1}^{\infty }(-1)^{k+1}{\frac {(z-1)^{k}}{k}}.\end{aligned}}}

Equating the function ln(z) to this infinite sum (series) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known as partial sums):

{\displaystyle (z-1),\ \ (z-1)-{\frac {(z-1)^{2}}{2}},\ \ (z-1)-{\frac {(z-1)^{2}}{2}}+{\frac {(z-1)^{3}}{3}},\ \ldots }

For example, with z = 1.5 the third approximation yields 0.4167, which is about 0.011 greater than ln(1.5) = 0.405465, and the ninth approximation yields 0.40553, which is only about 0.0001 greater. The nth partial sum can approximate ln(z) with arbitrary precision, provided the number of summands n is large enough. In elementary calculus, the series is said to converge to the function ln(z), and the function is the limit of the series. It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln(z) provides a particularly useful approximation to ln(1 + z) when z is small, |z| < 1, since then {\displaystyle \ln(1+z)=z-{\frac {z^{2}}{2}}+{\frac {z^{3}}{3}}-\cdots \approx z.} For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off the correct value 0.0953.

Another series is based on the inverse hyperbolic tangent function: {\displaystyle \ln(z)=2\cdot \operatorname {artanh} \,{\frac {z-1}{z+1}}=2\left({\frac {z-1}{z+1}}+{\frac {1}{3}}{\left({\frac {z-1}{z+1}}\right)}^{3}+{\frac {1}{5}}{\left({\frac {z-1}{z+1}}\right)}^{5}+\cdots \right),} for any real number z > 0.[nb 5] Using sigma notation, this is also written as {\displaystyle \ln(z)=2\sum _{k=0}^{\infty }{\frac {1}{2k+1}}\left({\frac {z-1}{z+1}}\right)^{2k+1}.} This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error of about 3×10^−6.

The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting {\displaystyle A={\frac {z}{\exp(y)}},} the logarithm of z is: {\displaystyle \ln(z)=y+\ln(A).} The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. A can be calculated using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10^b, so that ln(z) = ln(a) + b · ln(10).
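A minimal Python sketch of the inverse-hyperbolic-tangent series, together with the refinement step ln(z) = y + ln(z/e^y), might look as follows. The function names and the fixed term count are illustrative assumptions rather than part of any standard library routine.

```python
import math

def ln_artanh(z, terms=20):
    """Natural logarithm via 2*artanh((z-1)/(z+1)); converges fast for z near 1."""
    u = (z - 1) / (z + 1)
    total, power = 0.0, u
    for k in range(terms):
        total += power / (2 * k + 1)   # add u^(2k+1) / (2k+1)
        power *= u * u
    return 2 * total

def ln_refined(z, y):
    """Improve a rough guess y ~ ln(z): ln(z) = y + ln(z / e^y), with z/e^y near 1."""
    return y + ln_artanh(z / math.exp(y))

print(ln_artanh(1.5), math.log(1.5))        # both ~0.405465
print(ln_refined(100.0, 4.5), math.log(100.0))  # both ~4.605170
```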
A closely related method can be used to compute the logarithm of integers. Putting {\textstyle z={\frac {n+1}{n}}} in the above series, it follows that: {\displaystyle \ln(n+1)=\ln(n)+2\sum _{k=0}^{\infty }{\frac {1}{2k+1}}\left({\frac {1}{2n+1}}\right)^{2k+1}.} If the logarithm of a large integer n is known, then this series yields a fast converging series for log(n+1), with a rate of convergence of {\textstyle \left({\frac {1}{2n+1}}\right)^{2}}.

The arithmetic–geometric mean yields high-precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work ln(x) is approximated to a precision of 2^−p (or p precise bits) by the following formula (due to Carl Friedrich Gauss): {\displaystyle \ln(x)\approx {\frac {\pi }{2\,\mathrm {M} \!\left(1,2^{2-m}/x\right)}}-m\ln(2).} Here M(x, y) denotes the arithmetic–geometric mean of x and y. It is obtained by repeatedly calculating the average (x + y)/2 (arithmetic mean) and {\textstyle {\sqrt {xy}}} (geometric mean) of x and y and then letting those two numbers become the next x and y. The two numbers quickly converge to a common limit, which is the value of M(x, y). m is chosen such that {\displaystyle x\,2^{m}>2^{p/2}} to ensure the required precision. A larger m makes the M(x, y) calculation take more steps (the initial x and y are farther apart, so it takes more steps to converge) but gives more precision. The constants π and ln(2) can be calculated with quickly converging series.

While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm to compute the logarithm that is similar to long division and was later used in the Connection Machine. The algorithm relies on the fact that every real number x where 1 < x < 2 can be represented as a product of distinct factors of the form 1 + 2^−k. The algorithm sequentially builds that product P, starting with P = 1 and k = 1: if P · (1 + 2^−k) < x, then it changes P to P · (1 + 2^−k). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log(x) is the sum of the terms of the form log(1 + 2^−k) corresponding to those k for which the factor 1 + 2^−k was included in the product P, log(x) may be computed by simple addition, using a table of log(1 + 2^−k) for all k. Any base may be used for the logarithm table.
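To make the product-building idea concrete, here is a rough Python sketch. It follows the greedy rule just described; the table size of 40 terms and the floating-point arithmetic are illustrative simplifications, not Feynman's actual bit-level routine.

```python
import math

def feynman_log(x, max_k=40):
    """log(x) for 1 < x < 2 as a sum of tabulated values log(1 + 2^-k).

    Greedily builds a product P of distinct factors (1 + 2^-k) that
    approaches x from below, adding the matching table entry whenever
    a factor is accepted.
    """
    table = [math.log(1 + 2.0 ** -k) for k in range(1, max_k + 1)]
    P, result = 1.0, 0.0
    for k in range(1, max_k + 1):
        candidate = P * (1 + 2.0 ** -k)
        if candidate < x:          # keep the factor only if we stay below x
            P = candidate
            result += table[k - 1]
    return result

print(feynman_log(1.7))    # ~0.530628
print(math.log(1.7))       # reference value
```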
Applications

Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function log(x) grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data.

Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation. Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios: 10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the attenuation or amplification of electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm.

The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times (10^1.5) and a 6.0 releases 1000 times (10^3) the energy of a 4.0. Apparent magnitude measures the brightness of stars logarithmically. In chemistry the negative of the decimal logarithm, the decimal cologarithm, is indicated by the letter p. For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions H+ take in water). The activity of hydronium ions in neutral water is 10^−7 mol·L^−1, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1.
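As a small worked illustration of the decibel and pH definitions above, consider the following Python sketch; the helper names are illustrative, and the activities are the values quoted in the preceding paragraph.

```python
import math

def power_ratio_db(p_out, p_in):
    """Decibels from a power ratio: 10 times the common logarithm."""
    return 10 * math.log10(p_out / p_in)

def voltage_ratio_db(v_out, v_in):
    """Decibels from a voltage ratio: 20 times the common logarithm."""
    return 20 * math.log10(v_out / v_in)

def ph(hydronium_activity):
    """pH as the decimal cologarithm of the hydronium ion activity (mol/L)."""
    return -math.log10(hydronium_activity)

print(power_ratio_db(2, 1))    # ~3.01 dB: doubling the power adds about 3 dB
print(voltage_ratio_db(2, 1))  # ~6.02 dB: doubling the voltage adds about 6 dB
print(ph(1e-7))                # 7.0, neutral water
print(ph(1e-3))                # ~3, roughly vinegar
```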
Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a · b^x appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · x^k to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing power laws.

Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the ratio between the distance to a target and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation, such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less realistic than more recent models, such as Stevens's power law.) Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 10 times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.

Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm. Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.

Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables. Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is d (from 1 to 9) equals log10(d + 1) − log10(d), regardless of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting. The logarithm transformation is a type of data transformation used to bring the empirical distribution closer to the assumed one.

Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems. For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, log2(N) comparisons, where N is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to N · log(N). The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model.

A function f(x) is said to grow logarithmically if f(x) is (exactly or approximately) proportional to the logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number N can be represented in binary form in no more than log2(N) + 1 bits. In other words, the amount of memory needed to store N grows logarithmically with N.
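A short Python snippet can illustrate the Benford first-digit probabilities and the logarithmic growth of the binary representation length; the variable names and the sample value of N are illustrative.

```python
import math

# First-digit probabilities under Benford's law: P(d) = log10(d + 1) - log10(d)
for d in range(1, 10):
    print(d, round(math.log10(d + 1) - math.log10(d), 3))
# 1 -> 0.301, 2 -> 0.176, 3 -> 0.125, ..., 9 -> 0.046

# A natural number N fits in no more than log2(N) + 1 binary digits
N = 1_000_000
print(math.floor(math.log2(N)) + 1)  # 20
print(len(bin(N)) - 2)               # 20, counted directly from the binary string
```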
Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy S of some physical system is defined as {\displaystyle S=-k\sum _{i}p_{i}\ln(p_{i}).} The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, p_i is the probability that the state i is attained and k is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log2 N bits.

Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive.

Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure ln(3)/ln(2) ≈ 1.58. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.

Logarithms are related to musical tones and intervals. In equal temperament tunings, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. In the 12-tone equal temperament tuning common in modern Western music, each octave (doubling of frequency) is broken into twelve equally spaced intervals called semitones. For example, if the note A has a frequency of 440 Hz then the note B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree: {\displaystyle {\frac {466}{440}}\approx {\frac {493}{466}}\approx 1.059\approx {\sqrt[{12}]{2}}.} Intervals between arbitrary pitches can be measured in octaves by taking the base-2 logarithm of the frequency ratio, can be measured in equally tempered semitones by taking the base-2^{1/12} logarithm (12 times the base-2 logarithm), or can be measured in cents, hundredths of a semitone, by taking the base-2^{1/1200} logarithm (1200 times the base-2 logarithm). The latter is used for finer encoding, as it is needed for finer measurements or non-equal temperaments.
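For instance, the interval measures just described can be computed directly from frequency ratios. The following Python sketch uses the standard equal-tempered frequencies (466.16 Hz for B-flat and 493.88 Hz for B, with A at 440 Hz) rather than the rounded values quoted above; the function names are illustrative.

```python
import math

def semitones(f2, f1):
    """Interval size in equally tempered semitones: 12 times the base-2 logarithm."""
    return 12 * math.log2(f2 / f1)

def cents(f2, f1):
    """Interval size in cents: 1200 times the base-2 logarithm of the frequency ratio."""
    return 1200 * math.log2(f2 / f1)

print(semitones(466.16, 440.0))  # ~1.0   (A to B-flat is one semitone)
print(cents(493.88, 466.16))     # ~100   (B-flat to B, one semitone = 100 cents)
print(semitones(880.0, 440.0))   # 12.0   (an octave is twelve semitones)
```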
Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately given by {\displaystyle {\frac {x}{\ln(x)}},} in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by {\displaystyle \mathrm {Li} (x)=\int _{2}^{x}{\frac {1}{\ln(t)}}\,dt.} The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x). The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm.

The logarithm of n factorial, n! = 1 · 2 · ... · n, is given by {\displaystyle \ln(n!)=\ln(1)+\ln(2)+\cdots +\ln(n).} This can be used to obtain Stirling's formula, an approximation of n! for large n.
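As a small numerical check, one can compare the sum of logarithms with the common form of Stirling's approximation, ln(n!) ≈ n ln(n) − n + ½ ln(2πn); that explicit form is quoted here as a standard result rather than taken from the text above, and the sample value n = 100 is arbitrary.

```python
import math

n = 100

# ln(n!) as the sum ln(1) + ln(2) + ... + ln(n)
log_factorial = sum(math.log(k) for k in range(1, n + 1))

# Stirling's approximation: n*ln(n) - n + 0.5*ln(2*pi*n)
stirling = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

print(log_factorial)       # ~363.7394
print(stirling)            # ~363.7386, off by roughly 1/(12n)
print(math.lgamma(n + 1))  # the same quantity via the log-gamma function
```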
Generalizations

All the complex numbers a that solve the equation {\displaystyle e^{a}=z} are called complex logarithms of z, when z is (considered as) a complex number. A complex number is commonly represented as z = x + iy, where x and y are real numbers and i is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane, as shown at the right. The polar form encodes a non-zero complex number z by its absolute value, that is, the (positive, real) distance r to the origin, and an angle between the real (x) axis Re and the line passing through both the origin and z. This angle is called the argument of z. The absolute value r of z is given by {\textstyle r={\sqrt {x^{2}+y^{2}}}.} Using the geometrical interpretation of sine and cosine and their periodicity in 2π, any complex number z may be denoted as {\displaystyle {\begin{aligned}z&=x+iy\\&=r(\cos \varphi +i\sin \varphi )\\&=r(\cos(\varphi +2k\pi )+i\sin(\varphi +2k\pi )),\end{aligned}}} for any integer number k. Evidently the argument of z is not uniquely specified: both φ and φ' = φ + 2kπ are valid arguments of z for all integers k, because adding 2kπ radians or k⋅360°[nb 6] to φ corresponds to "winding" around the origin counter-clock-wise by k turns. The resulting complex number is always z, as illustrated at the right for k = 1. One may select exactly one of the possible arguments of z as the so-called principal argument, denoted Arg(z), with a capital A, by requiring φ to belong to one, conveniently selected turn, e.g. −π < φ ≤ π or 0 ≤ φ < 2π. These regions, where the argument of z is uniquely determined, are called branches of the argument function.

Euler's formula connects the trigonometric functions sine and cosine to the complex exponential: {\displaystyle e^{i\varphi }=\cos \varphi +i\sin \varphi .} Using this formula, and again the periodicity, the following identities hold: {\displaystyle {\begin{aligned}z&=r\left(\cos \varphi +i\sin \varphi \right)\\&=r\left(\cos(\varphi +2k\pi )+i\sin(\varphi +2k\pi )\right)\\&=re^{i(\varphi +2k\pi )}\\&=e^{\ln(r)}e^{i(\varphi +2k\pi )}\\&=e^{\ln(r)+i(\varphi +2k\pi )}=e^{a_{k}},\end{aligned}}} where ln(r) is the unique real natural logarithm, a_k denote the complex logarithms of z, and k is an arbitrary integer. Therefore, the complex logarithms of z, which are all those complex values a_k for which the a_k-th power of e equals z, are the infinitely many values {\displaystyle a_{k}=\ln(r)+i(\varphi +2k\pi ),} for arbitrary integers k.

Taking k such that φ + 2kπ is within the defined interval for the principal arguments, then a_k is called the principal value of the logarithm, denoted Log(z), again with a capital L. The principal argument of any positive real number x is 0; hence Log(x) is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powers do not generalize to the principal value of the complex logarithm. The illustration at the right depicts Log(z), confining the arguments of z to the interval (−π, π]. This way the corresponding branch of the complex logarithm has discontinuities all along the negative real x axis, which can be seen in the jump in the hue there. This discontinuity arises from jumping to the other boundary in the same branch when crossing a boundary, i.e. not changing to the corresponding k-value of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range restrictions on the argument makes the relations "argument of z", and consequently the "logarithm of z", multi-valued functions.
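In Python, cmath.log returns this principal value, which makes the failure of the product formula easy to observe; the sample point below is an arbitrary illustrative choice.

```python
import cmath

z = -1 + 1j   # Arg(z) = 3*pi/4

# cmath.log gives the principal value Log(z) = ln|z| + i*Arg(z), with Arg(z) in (-pi, pi]
print(cmath.log(z))        # ~0.3466 + 2.3562j

# The product rule fails for the principal value: the "true" argument of z*z
# would be 3*pi/2, which gets wrapped back into (-pi, pi].
print(2 * cmath.log(z))    # ~0.6931 + 4.7124j
print(cmath.log(z * z))    # ~0.6931 - 1.5708j  (differs by 2*pi*i)

# Both values are nevertheless complex logarithms of z*z = -2j:
print(cmath.exp(2 * cmath.log(z)))   # ~ -2j
print(cmath.exp(cmath.log(z * z)))   # ~ -2j
```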
Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the p-adic logarithm, the inverse function of the p-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map.

In the context of finite groups exponentiation is given by repeatedly multiplying one group element b with itself. The discrete logarithm is the integer n solving the equation {\displaystyle b^{n}=x,} where x is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field. Further logarithm-like inverse functions include the double logarithm ln(ln(x)), the super- or hyper-4-logarithm (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of f(w) = we^w, and of the logistic function, respectively.

From the perspective of group theory, the identity log(cd) = log(c) + log(d) expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups. By means of that isomorphism, the Haar measure (Lebesgue measure) dx on the reals corresponds to the Haar measure dx/x on the positive reals.

The non-negative reals not only have a multiplication, but also have addition, and form a semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring. Logarithmic one-forms df/f appear in complex analysis and algebraic geometry as differential forms with logarithmic poles.

The polylogarithm is the function defined by {\displaystyle \operatorname {Li} _{s}(z)=\sum _{k=1}^{\infty }{z^{k} \over k^{s}}.} It is related to the natural logarithm by Li_1(z) = −ln(1 − z). Moreover, Li_s(1) equals the Riemann zeta function ζ(s).

See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Volcanism] | [TOKENS: 4497]
Contents Volcanism Volcanism, vulcanism, volcanicity, or volcanic activity is the phenomenon where solids, liquids, gases, and their mixtures erupt to the surface of a solid-surface astronomical body such as a planet or a moon. It is caused by the presence of a heat source, usually internally generated, inside the body; the heat is generated by various processes, such as radioactive decay or tidal heating. This heat partially melts solid material in the body or turns material into gas. The mobilized material rises through the body's interior and may break through the solid surface.

Causes

For volcanism to occur, the temperature of the mantle must have risen to about half its melting point. At this point, the mantle's viscosity will have dropped to about 10^21 Pascal-seconds. When large-scale melting occurs, the viscosity rapidly falls to 10^3 Pascal-seconds or even less, increasing the heat transport rate a million-fold. The occurrence of volcanism is partially due to the fact that melted material tends to be more mobile and less dense than the material from which it was produced, which can cause it to rise to the surface.

There are multiple ways to generate the heat needed for volcanism. Volcanism on outer solar system moons is powered mainly by tidal heating. Tidal heating is caused by the deformation of a body's shape due to mutual gravitational attraction, which generates heat. Tidal heating is the cause of volcanism on Io, a moon of Jupiter. Earth experiences tidal heating from the Moon, deforming by up to 1 metre (3 feet), but this does not make up a major portion of Earth's total heat. During a planet's formation, it would have experienced heating from impacts from planetesimals, which would have dwarfed even the asteroid impact that caused the extinction of the dinosaurs. This heating could trigger differentiation, further heating the planet. The larger a body is, the slower it loses heat. In larger bodies, for example Earth, this heat, known as primordial heat, still makes up much of the body's internal heat, but the Moon, which is smaller than Earth, has lost most of this heat. Another heat source is radiogenic heat, caused by radioactive decay. The decay of aluminium-26 would have significantly heated planetary embryos, but due to its short half-life (less than a million years), any traces of it have long since vanished. There are small traces of unstable isotopes in common minerals, and all the terrestrial planets, and the Moon, experience some of this heating. The icy bodies of the outer solar system experience much less of this heat because they tend to not be very dense and not have much silicate material (radioactive elements concentrate in silicates). On Neptune's moon Triton, and possibly on Mars, cryogeyser activity takes place. The source of heat is external (heat from the Sun) rather than internal.

Decompression melting happens when solid material from deep beneath the body rises upwards. Pressure decreases as the material rises upwards, and so does the melting point. So, a rock that is solid at a given pressure and temperature can become liquid if the pressure, and thus melting point, decreases even if the temperature stays constant. However, in the case of water, increasing pressure decreases the melting point until a pressure of 0.208 GPa is reached, after which the melting point increases with pressure. Flux melting occurs when the melting point is lowered by the addition of volatiles, for example, water or carbon dioxide.
Like decompression melting, it is not caused by an increase in temperature, but rather by a decrease in melting point. Cryovolcanism, instead of originating in a uniform subsurface ocean, may instead take place from discrete liquid reservoirs. The first way these can form is a plume of warm ice welling up and then sinking back down, forming a convection current. A model developed to investigate the effects of this on Europa found that energy from tidal heating became focused in these plumes, allowing melting to occur in these shallow depths as the plume spreads laterally (horizontally). The next is a switch from vertical to horizontal propagation of a fluid filled crack. Another mechanism is heating of ice from release of stress through lateral motion of fractures in the ice shell penetrating it from the surface, and even heating from large impacts can create such reservoirs. When material of a planetary body begins to melt, the melting first occurs in small pockets in certain high energy locations, for example grain boundary intersections and where different crystals react to form eutectic liquid, that initially remain isolated from one another, trapped inside rock. If the contact angle of the melted material allows the melt to wet crystal faces and run along grain boundaries, the melted material will accumulate into larger quantities. On the other hand, if the contact angle is greater than about 60 degrees, much more melt must form before it can separate from its parental rock. Studies of rocks on Earth suggest that melt in hot rocks quickly collects into pockets and veins that are much larger than the grain size, in contrast to the model of rigid melt percolation. Melt, instead of uniformly flowing out of source rock, flows out through rivulets which join to create larger veins. Under the influence of buoyancy, the melt rises. Diapirs may also form in non-silicate bodies, playing a similar role in moving warm material towards the surface. A dike is a vertical fluid-filled crack, from a mechanical standpoint it is a water filled crevasse turned upside down. As magma rises into the vertical crack, the low density of the magma compared to the wall rock means that the pressure falls less rapidly than in the surrounding denser rock. If the average pressure of the magma and the surrounding rock are equal, the pressure in the dike exceeds that of the enclosing rock at the top of the dike, and the pressure of the rock is greater than that of the dike at its bottom. So the magma thus pushes the crack upwards at its top, but the crack is squeezed closed at its bottom due to an elastic reaction (similar to the bulge next to a person sitting down on a springy sofa). Eventually, the tail gets so narrow it nearly pinches off, and no more new magma will rise into the crack. The crack continues to ascend as an independent pod of magma. This model of volcanic eruption posits that magma rises through a rigid open channel, in the lithosphere and settles at the level of hydrostatic equilibrium. Despite how it explains observations well (which newer models cannot), such as an apparent concordance of the elevation of volcanoes near each other, it cannot be correct and is now discredited, because the lithosphere thickness derived from it is too large for the assumption of a rigid open channel to hold. Unlike silicate volcanism, where melt can rise by its own buoyancy until it reaches the shallow crust, in cryovolcanism, the water (cryomagmas tend to be water based) is denser than the ice above it. 
One way to allow cryomagma to reach the surface is to make the water buoyant, by making the water less dense, either through the presence of other compounds that reverse negative buoyancy, or with the addition of exsolved gas bubbles in the cryomagma that were previously dissolved into it (that makes the cryomagma less dense), or with the presence of a densifying agent in the ice shell. Another is to pressurise the fluid to overcome negative buoyancy and make it reach the surface. When the ice shell above a subsurface ocean thickens, it can pressurise the entire ocean (in cryovolcanism, frozen water or brine is less dense than in liquid form). When a reservoir of liquid partially freezes, the remaining liquid is pressurised in the same way. For a crack in the ice shell to propagate upwards, the fluid in it must have positive buoyancy or external stresses must be strong enough to break through the ice. External stresses could include those from tides or from overpressure due to freezing as explained above. There is yet another possible mechanism for ascent of cryovolcanic melts. If a fracture with water in it reaches an ocean or subsurface fluid reservoir, the water would rise to its level of hydrostatic equilibrium, at about nine-tenths of the way to the surface. Tides which induce compression and tension in the ice shell may pump the water farther up. A 1988 article proposed a possibility for fractures propagating upwards from the subsurface ocean of Jupiter's moon Europa. It proposed that a fracture propagating upwards would possess a low pressure zone at its tip, allowing volatiles dissolved within the water to exsolve into gas. The elastic nature of the ice shell would likely prevent the fracture reaching the surface, and the crack would instead pinch off, enclosing the gas and liquid. The gas would increase buoyancy and could allow the crack to reach the surface. Even impacts can create conditions that allow for enhanced ascent of magma. An impact may remove the top few kilometres of crust, and pressure differences caused by the difference in height between the basin and the height of the surrounding terrain could allow eruption of magma which otherwise would have stayed beneath the surface. A 2011 article showed that there would be zones of enhanced magma ascent at the margins of an impact basin. Not all of these mechanisms, and maybe even none, operate on a given body. Types Silicate volcanism occurs where silicate materials are erupted. Silicate lava flows, like those found on Earth, solidify at about 1000 degrees Celsius. A mud volcano is formed when fluids and gases under pressure erupt to the surface, bringing mud with them. This pressure can be caused by the weight of overlying sediments over the fluid which pushes down on the fluid, preventing it from escaping, by fluid being trapped in the sediment, migrating from deeper sediment into other sediment or being made from chemical reactions in the sediment. They often erupt quietly, but sometimes they erupt flammable gases such as methane. Cryovolcanism is the eruption of volatiles into an environment below their freezing point. The processes behind it are different to silicate volcanism because the cryomagma (which is usually water-based) is normally denser than its surroundings, meaning it cannot rise by its own buoyancy. Sulfur lavas have a different behaviour to silicate ones. First, sulfur has a low melting point of about 120 degrees Celsius. 
Also, after cooling down to about 175 degrees Celsius the lava rapidly loses viscosity, unlike silicate lavas like those found on Earth. Lava types When magma erupts onto a planet's surface, it is termed lava. Viscous lavas form short, stubby glass-rich flows. These usually have a wavy solidified surface texture. More fluid lavas have solidified surface textures that volcanologists classify into four types. Pillow lava forms when a trigger, often lava making contact with water, causes a lava flow to cool rapidly. This splinters the surface of the lava, and the magma then collects into sacks that often pile up in front of the flow, forming a structure called a pillow. A’a lava has a rough, spiny surface made of clasts of lava called clinkers. Block lava is another type of lava, with less jagged fragments than in a’a lava. Pahoehoe lava is by far the most common lava type, both on Earth and probably the other terrestrial planets. It has a smooth surface, with mounds, hollows and folds. Gentle/explosive activity A volcanic eruption could just be a simple outpouring of material onto the surface of a planet, but they usually involve a complex mixture of solids, liquids and gases which behave in equally complex ways. Some types of explosive eruptions can release energy a quarter that of an equivalent mass of TNT. Volcanic eruptions on Earth have been consistently observed to progress from erupting gas rich material to gas depleted material, although an eruption may alternate between erupting gas rich to gas depleted material and vice versa multiple times. This can be explained by the enrichment of magma at the top of a dike by gas which is released when the dike breaches the surface, followed by magma from lower down than did not get enriched with gas. The reason the dissolved gas in the magma separates from it when the magma nears the surface is due to the effects of temperature and pressure on gas solubility. Pressure increases gas solubility, and if a liquid with dissolved gas in it depressurises, the gas will tend to exsolve (or separate) from the liquid. An example of this is what happens when a bottle of carbonated drink is quickly opened: when the seal is opened, pressure decreases and bubbles of carbon dioxide gas appear throughout the liquid. Fluid magmas erupt quietly. Any gas that has exsolved from the magma easily escapes even before it reaches the surface. However, in viscous magmas, gases remain trapped in the magma even after they have exsolved, forming bubbles inside the magma. These bubbles enlarge as the magma nears the surface due to the dropping pressure, and the magma grows substantially. This fact gives volcanoes erupting such material a tendency to ‘explode’, although instead of the pressure increase associated with an explosion, pressure always decreases in a volcanic eruption. Generally, explosive cryovolcanism is driven by exsolution of volatiles that were previously dissolved into the cryomagma, similar to what happens in explosive silicate volcanism as seen on Earth, which is what is mainly covered below. Silica-rich magmas cool beneath the surface before they erupt. As they do this, bubbles exsolve from the magma. As the magma nears the surface, the bubbles and thus the magma increase in volume. The resulting pressure eventually breaks through the surface, and the release of pressure causes more gas to exsolve, doing so explosively. The gas may expand at hundreds of metres per second, expanding upward and outward. 
As the eruption progresses, a chain reaction causes the magma to be ejected at higher and higher speeds. The violently expanding gas disperses and breaks up magma, forming a colloid of gas and magma called volcanic ash. The cooling of the gas in the ash as it expands chills the magma fragments, often forming tiny glass shards recognisable as portions of the walls of former liquid bubbles. In more fluid magmas the bubble walls may have time to reform into spherical liquid droplets. The final state of the colloids depends strongly on the ratio of liquid to gas. Gas-poor magmas end up cooling into rocks with small cavities, becoming vesicular lava. Gas-rich magmas cool to form rocks with cavities that nearly touch, with an average density less than that of water, forming pumice. Meanwhile, other material can be accelerated with the gas, becoming volcanic bombs. These can travel with so much energy that large ones can create craters when they hit the ground. A colloid of volcanic gas and magma can form as a density current called a pyroclastic flow. This occurs when erupted material falls back to the surface. The colloid is somewhat fluidised by the gas, allowing it to spread. Pyroclastic flows can often climb over obstacles, and devastate human life. Pyroclastic flows are a common feature at explosive volcanoes on Earth. Pyroclastic flows have been found on Venus, for example at the Dione Regio volcanoes. A phreatic eruption can occur when hot water under pressure is depressurised. Depressurisation reduces the boiling point of the water, so when depressurised the water suddenly boils. Or it may happen when groundwater is suddenly heated, flashing to steam suddenly. When water turns into steam in a phreatic eruption, it expands at supersonic speeds, up to 1,700 times its original volume. This can be enough to shatter solid rock, and hurl rock fragments hundreds of metres. A phreatomagmatic eruption occurs when hot magma makes contact with water, creating an explosion. One mechanism for explosive cryovolcanism is cryomagma making contact with clathrate hydrates. Clathrate hydrates, if exposed to warm temperatures, readily decompose. A 1982 article pointed out the possibility that the production of pressurised gas upon destabilisation of clathrate hydrates making contact with warm rising magma could produce an explosion that breaks through the surface, resulting in explosive cryovolcanism. If a fracture reaches the surface of an icy body and the column of rising water is exposed to the near-vacuum of the surface of most icy bodies, it will immediately start to boil, because its vapor pressure is much more than the ambient pressure. Not only that, but any volatiles in the water will exsolve. The combination of these processes will release droplets and vapor, which can rise up the fracture, creating a plume. This is thought to be partially responsible for Enceladus's ice plumes. Occurrence On Earth, volcanoes are most often found where tectonic plates are diverging or converging, and because most of Earth's plate boundaries are underwater, most volcanoes are found underwater. For example, a mid-ocean ridge, such as the Mid-Atlantic Ridge, has volcanoes caused by divergent tectonic plates whereas the Pacific Ring of Fire has volcanoes caused by convergent tectonic plates. Volcanoes can also form where there is stretching and thinning of the crust's plates, such as in the East African Rift and the Wells Gray-Clearwater volcanic field and Rio Grande rift in North America. 
Volcanism away from plate boundaries has been postulated to arise from upwelling diapirs from the core–mantle boundary, 3,000 kilometers (1,900 mi) deep within Earth. This results in hotspot volcanism, of which the Hawaiian hotspot is an example. Volcanoes are usually not created where two tectonic plates slide past one another.

Studies show that winters in the Northern Hemisphere were warmer from 1912 to 1952, a period in which no massive eruptions took place. These studies demonstrate how eruptions can cause changes within the Earth's atmosphere. Large eruptions can affect atmospheric temperature as ash and droplets of sulfuric acid obscure the Sun and cool Earth's troposphere. Historically, large volcanic eruptions have been followed by volcanic winters which have caused catastrophic famines.

Earth's Moon has no large volcanoes and no current volcanic activity, although recent evidence suggests it may still possess a partially molten core. However, the Moon does have many volcanic features such as maria (the darker patches seen on the Moon), rilles and domes.

The planet Venus has a surface that is 90% basalt, indicating that volcanism played a major role in shaping its surface. The planet may have had a major global resurfacing event about 500 million years ago, from what scientists can tell from the density of impact craters on the surface. Lava flows are widespread and forms of volcanism not present on Earth occur as well. Changes in the planet's atmosphere and observations of lightning have been attributed to ongoing volcanic eruptions, although there is no confirmation of whether or not Venus is still volcanically active. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano Maat Mons, in the form of ash flows near the summit and on the northern flank. However, the interpretation of the flows as ash flows has been questioned.

There are several extinct volcanoes on Mars, four of which are vast shield volcanoes far bigger than any on Earth. They include Arsia Mons, Ascraeus Mons, Hecates Tholus, Olympus Mons, and Pavonis Mons. These volcanoes have been extinct for many millions of years, but the European Mars Express spacecraft has found evidence that volcanic activity may have occurred on Mars in the recent past as well.

Jupiter's moon Io is the most volcanically active object in the Solar System because of tidal interaction with Jupiter. It is covered with volcanoes that erupt sulfur, sulfur dioxide and silicate rock, and as a result, Io is constantly being resurfaced. There are only two bodies in the Solar System where volcanoes can be easily seen due to their high activity: Earth and Io. Its lavas are the hottest known anywhere in the Solar System, with temperatures exceeding 1,800 K (1,500 °C). In February 2001, the largest recorded volcanic eruptions in the Solar System occurred on Io.

Europa, the smallest of Jupiter's Galilean moons, also appears to have an active volcanic system, except that its volcanic activity is entirely in the form of water, which freezes into ice on the frigid surface. This process is known as cryovolcanism, and is apparently most common on the moons of the outer planets of the Solar System. In 1989, the Voyager 2 spacecraft observed cryovolcanoes (ice volcanoes) on Triton, a moon of Neptune, and in 2005 the Cassini–Huygens probe photographed fountains of frozen particles erupting from Enceladus, a moon of Saturn.
The ejecta may be composed of water, liquid nitrogen, ammonia, dust, or methane compounds. Cassini–Huygens also found evidence of a methane-spewing cryovolcano on the Saturnian moon Titan, which is believed to be a significant source of the methane found in its atmosphere. It is theorized that cryovolcanism may also be present on the Kuiper Belt Object Quaoar. A 2010 study of the exoplanet COROT-7b, which was detected by transit in 2009, suggested that tidal heating from the host star very close to the planet and neighboring planets could generate intense volcanic activity similar to that found on Io. See also References External links
========================================
[SOURCE: https://techcrunch.com/podcast/compensation-culture-and-cap-tables-with-yuri-sagalov-general-catalyst/] | [TOKENS: 933]
Compensation, culture, and cap tables with Yuri Sagalov, General Catalyst

Build Mode is back. This season we’re breaking down what it really takes to build a world-class founding team starting with your cap table, equity structures, and startup compensation strategy. We kick off with Yuri Sagalov, managing director at General Catalyst and former founder, YC partner, and seed investor at Wayfinder Ventures. Yuri has worked with hundreds of pre-seed and seed-stage startups, and he shares practical advice on how early-stage founders should think about startup equity, cap table design, investor selection, and compensation structures from day one. He breaks down: No matter where you are in your startup journey, this episode will help you get the incentive structure right from the beginning.

Chapters:
00:00 – Why your first hires deserve more equity
00:31 – Meet Yuri Sagalov (YC → General Catalyst)
02:12 – Your cap table is part of your team
02:50 – The 3 types of investors (avoid this one)
05:02 – How to split equity with a co-founder
07:55 – How much equity to give early employees
09:37 – How to talk compensation and risk
12:31 – Red flags in formation docs and vesting
18:27 – Advisors for equity? Usually a mistake
20:05 – The 20–25% seed dilution rule
26:03 – The shift to 10-year stock options
34:11 – Don’t scale before product-market fit
39:23 – Final advice: Just start and choose your co-founder carefully

New episodes of Build Mode drop every Thursday. Hosted by Isabelle Johannessen. Produced and edited by Maggie Nye. Audience development led by Morgan Little. Special thanks to the Foundry and Cheddar video teams.
========================================
[SOURCE: https://he.wikipedia.org/wiki/VIAF] | [TOKENS: 677]
Contents VIAF The Virtual International Authority File (VIAF) is a collection of international authority records for persons and other entities that are contained in other authority files, mainly those of national libraries and museums. The strength of VIAF lies in a single identifier that is linked to the identifiers used by the various libraries, making it possible, through one identifier, to reach many collections relating to the same entity. VIAF is virtual and also allows individuals to access an identifier independently over the Internet.

History

VIAF is managed by OCLC and is the product of joint work carried out by the German National Library, the Library of Congress, and the WorldCat Identities service developed by OCLC in 2003. In 2007 the National Library of France joined the project, and in 2012 the service was opened to everyone.

Participants

The main libraries participating in this project are:

External links Footnotes
========================================
[SOURCE: https://en.wikipedia.org/wiki/Transcendental_function] | [TOKENS: 2649]
Contents Transcendental function In mathematics, a transcendental function is an analytic function that does not satisfy a polynomial equation whose coefficients are functions of the independent variable that can be written using only the basic operations of addition, subtraction, multiplication, and division (without the need of taking limits). This is in contrast to an algebraic function. Examples of transcendental functions include the exponential function, the logarithm function, the hyperbolic functions, and the trigonometric functions. Equations over these expressions are called transcendental equations.

Definition

Formally, an analytic function {\displaystyle f} of one real or complex variable is transcendental if it is algebraically independent of that variable. This means the function does not satisfy any polynomial equation. For example, a function f that does satisfy such a polynomial equation is not transcendental, but algebraic. Similarly, the function f that satisfies the equation {\displaystyle f(x)^{5}+f(x)=x} is not transcendental, but algebraic, even though it cannot be written as a finite expression involving the basic arithmetic operations. This definition can be extended to functions of several variables.

History

The transcendental functions sine and cosine were tabulated from physical measurements in antiquity, as evidenced in Greece (Hipparchus) and India (jya and koti-jya). In describing Ptolemy's table of chords, an equivalent to a table of sines, Olaf Pedersen wrote: The mathematical notion of continuity as an explicit concept is unknown to Ptolemy. That he, in fact, treats these functions as continuous appears from his unspoken presumption that it is possible to determine a value of the dependent variable corresponding to any value of the independent variable by the simple process of linear interpolation.

A revolutionary understanding of these circular functions occurred in the 17th century and was explicated by Leonhard Euler in 1748 in his Introduction to the Analysis of the Infinite. These ancient transcendental functions became known as continuous functions through quadrature of the rectangular hyperbola xy = 1 by Grégoire de Saint-Vincent in 1647, two millennia after Archimedes had produced The Quadrature of the Parabola. The area under the hyperbola was shown to have the scaling property of constant area for a constant ratio of bounds. The hyperbolic logarithm function so described was of limited service until 1748, when Leonhard Euler related it to functions where a constant is raised to a variable exponent, such as the exponential function where the constant base is e. By introducing these transcendental functions and noting the bijection property that implies an inverse function, some facility was provided for algebraic manipulations of the natural logarithm even if it is not an algebraic function.

The exponential function is written {\displaystyle \exp(x)=e^{x}}. Euler identified it with the infinite series {\textstyle \sum _{k=0}^{\infty }x^{k}/k!}, where k! denotes the factorial of k. The even and odd terms of this series provide sums denoting cosh(x) and sinh(x), so that {\displaystyle e^{x}=\cosh x+\sinh x.} These transcendental hyperbolic functions can be converted into circular functions sine and cosine by introducing (−1)^k into the series, resulting in alternating series. After Euler, mathematicians view the sine and cosine this way to relate the transcendence to the logarithm and exponential functions, often through Euler's formula in complex number arithmetic.
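A small numerical illustration of this decomposition, using truncated partial sums, can be written in Python; the term count of 30 and the sample argument are arbitrary choices.

```python
import math

def exp_series(x, terms=30):
    """Partial sum of Euler's series: sum over k of x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

def cosh_series(x, terms=30):
    """Even-indexed terms only: sum over k of x^(2k) / (2k)!."""
    return sum(x ** (2 * k) / math.factorial(2 * k) for k in range(terms))

def sinh_series(x, terms=30):
    """Odd-indexed terms only: sum over k of x^(2k+1) / (2k+1)!."""
    return sum(x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(terms))

x = 1.7
print(exp_series(x), math.exp(x))        # both ~5.4739
print(cosh_series(x) + sinh_series(x))   # same value: e^x = cosh x + sinh x
print(cosh_series(x), math.cosh(x))      # ~2.8283
```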
Examples

The following functions are transcendental:

{\displaystyle {\begin{aligned}f_{1}(x)&=x^{\pi }\\[2pt]f_{2}(x)&=e^{x}\\[2pt]f_{3}(x)&=\ln {x}\\[2pt]f_{4}(x)&=\cosh {x}\\f_{5}(x)&=\sinh {x}\\f_{6}(x)&=\tanh {x}\\f_{7}(x)&=\sinh ^{-1}{x}\\[2pt]f_{8}(x)&=\tanh ^{-1}{x}\\[2pt]f_{9}(x)&=\cos {x}\\f_{10}(x)&=\sin {x}\\f_{11}(x)&=\tan {x}\\f_{12}(x)&=\sin ^{-1}{x}\\[2pt]f_{13}(x)&=\cos ^{-1}{x}\\[2pt]f_{14}(x)&=\tan ^{-1}{x}\\[2pt]f_{15}(x)&=x!\\f_{16}(x)&={\frac {1}{x!}}\\[2pt]f_{17}(x)&=x^{x}\\[2pt]\end{aligned}}}

For the first function f_1(x), the exponent π can be replaced by any other irrational number, and the function will remain transcendental. For the second and third functions f_2(x) and f_3(x), the base e can be replaced by any other positive real number base not equaling 1, and the functions will remain transcendental. Functions 4–8 denote the hyperbolic trigonometric functions, while functions 9–14 denote the circular trigonometric functions. The fifteenth function f_15(x) denotes the analytic extension of the factorial function via the gamma function, and f_16(x) is its reciprocal, an entire function. Finally, in the last function f_17(x), the exponent x can be replaced by kx for any nonzero real k, and the function will remain transcendental.

Algebraic and transcendental functions

The most familiar transcendental functions are the logarithm, the exponential (with any non-trivial base), the trigonometric, and the hyperbolic functions, and the inverses of all of these. Less familiar are the special functions of analysis, such as the gamma, elliptic, and zeta functions, all of which are transcendental. The generalized hypergeometric and Bessel functions are transcendental in general, but algebraic for some special parameter values. Transcendental functions cannot be defined using only the operations of addition, subtraction, multiplication, division, and nth roots (where n is any integer), without using some "limiting process". A function that is not transcendental is algebraic. Simple examples of algebraic functions are the rational functions and the square root function, but in general, algebraic functions cannot be defined as finite formulas of the elementary functions, as shown by the example above with {\displaystyle f(x)^{5}+f(x)=x} (see Abel–Ruffini theorem).

The indefinite integral of many algebraic functions is transcendental. For example, the integral {\displaystyle \int _{t=1}^{x}{\frac {1}{t}}dt} turns out to equal the logarithm function ln(x).
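This can be checked numerically; the following Python sketch approximates the integral with a simple midpoint rule, with the step count chosen arbitrarily.

```python
import math

def integral_of_reciprocal(x, steps=100_000):
    """Midpoint-rule approximation of the integral of 1/t from 1 to x."""
    h = (x - 1) / steps
    return h * sum(1 / (1 + (i + 0.5) * h) for i in range(steps))

for x in (2.0, 10.0, 100.0):
    print(integral_of_reciprocal(x), math.log(x))
# each pair agrees to several decimal places: 0.6931..., 2.3025..., 4.6051...
```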
For example, $\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^{n}$ converges to the exponential function $e^{x}$, and the infinite sum $\sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!}$ turns out to equal the hyperbolic cosine function $\cosh x$. In fact, it is impossible to define any transcendental function in terms of algebraic functions without using some such "limiting procedure" (integrals, sequential limits, and infinite sums are just a few). Differential algebra examines how integration frequently creates functions that are algebraically independent of some class, such as when one takes polynomials with trigonometric functions as variables. Transcendentally transcendental functions Most familiar transcendental functions, including the special functions of mathematical physics, are solutions of algebraic differential equations. Those that are not, such as the gamma and the zeta functions, are called transcendentally transcendental or hypertranscendental functions. Exceptional set If f is an algebraic function and $\alpha$ is an algebraic number, then $f(\alpha)$ is also an algebraic number. The converse is not true: there are entire transcendental functions $f$ such that $f(\alpha)$ is an algebraic number for any algebraic $\alpha$. For a given transcendental function the set of algebraic numbers giving algebraic results is called the exceptional set of that function. Formally it is defined by $\mathcal{E}(f) = \left\{ \alpha \in \overline{\mathbb{Q}} : f(\alpha) \in \overline{\mathbb{Q}} \right\}$. In many instances the exceptional set is fairly small. For example, $\mathcal{E}(\exp) = \{0\}$; this was proved by Lindemann in 1882. In particular, exp(1) = e is transcendental. Also, since exp(iπ) = −1 is algebraic, we know that iπ cannot be algebraic. Since i is algebraic, this implies that π is a transcendental number. In general, finding the exceptional set of a function is a difficult problem, but if it can be calculated it can often lead to results in transcendental number theory. Exceptional sets have been determined for a number of other standard functions as well. While calculating the exceptional set for a given function is not easy, it is known that given any subset of the algebraic numbers, say A, there is a transcendental function whose exceptional set is A. The subset does not need to be proper, meaning that A can be the set of all algebraic numbers. This directly implies that there exist transcendental functions that produce transcendental numbers only when given transcendental numbers. Alex Wilkie also proved that there exist analytic transcendental functions whose transcendence cannot be established by first-order-logic proofs, by providing an exemplary analytic function. Dimensional analysis In dimensional analysis, transcendental functions are notable because they make sense only when their argument is dimensionless (possibly after algebraic reduction). Because of this, transcendental functions can be an easy-to-spot source of dimensional errors. For example, log(5 metres) is a nonsensical expression, unlike log(5 metres / 3 metres) or log(3) metres. One could attempt to apply a logarithmic identity to get log(5) + log(metres), which highlights the problem: applying a non-algebraic operation to a dimension produces meaningless results. See also References External links
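As a quick illustration of the limit and series examples given earlier in this section, the following sketch (illustrative only, and assuming the SymPy library is installed) lets a computer algebra system evaluate the limit symbolically and spot-checks the cosh series numerically.

```python
import math
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

# The algebraic expressions (1 + x/n)**n converge to the transcendental exp(x).
assert sp.simplify(sp.limit((1 + x / n) ** n, n, sp.oo) - sp.exp(x)) == 0

# Partial sums of x**(2k)/(2k)! approach cosh(x); a numerical spot check at x = 1.3.
value = 1.3
partial = sum(value ** (2 * k) / math.factorial(2 * k) for k in range(20))
assert math.isclose(partial, math.cosh(value), rel_tol=1e-12)
```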
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sexual_reproduction] | [TOKENS: 4215]
Contents Sexual reproduction Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes. In placental mammals, sperm cells exit the penis through the male urethra and enter the vagina during copulation, while egg cells enter the uterus through the oviduct. Other vertebrates of both sexes possess a cloaca for the release of sperm or egg cells. Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor. In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations. During sexual reproduction, two haploid gametes combine into one diploid cell known as a zygote in a process called fertilization. The nuclei from the gametes fuse, and each gamete contributes half of the genetic material of the zygote. Multiple cell divisions by mitosis (without change in the number of chromosomes) then develop into a multicellular diploid phase or generation. In plants, the diploid phase, known as the sporophyte, produces spores by meiosis. These spores then germinate and divide by mitosis to form a haploid multicellular phase, the gametophyte, which produces gametes directly by mitosis. This type of life cycle, involving alternation between two multicellular phases, the sexual haploid gametophyte and asexual diploid sporophyte, is known as alternation of generations. The evolution of sexual reproduction is considered paradoxical, because asexual reproduction should be able to outperform it as every young organism created can bear its own young. This implies that an asexual population has an intrinsic capacity to grow more rapidly with each generation. This 50% cost is a fitness disadvantage of sexual reproduction. The two-fold cost of sex includes this cost and the fact that any organism can only pass on 50% of its own genes to its offspring. 
However, one definite advantage of sexual reproduction is that it increases genetic diversity and impedes the accumulation of harmful genetic mutations. Sexual selection is a mode of natural selection in which some individuals out-reproduce others of a population because they are better at securing mates for sexual reproduction. It has been described as "a powerful evolutionary force that does not exist in asexual populations". Evolution The first fossilized evidence of sexual reproduction in eukaryotes is from the Stenian period, about 1.05 billion years old. Biologists studying evolution propose several explanations for the development of sexual reproduction and its maintenance. These reasons include reducing the likelihood of the accumulation of deleterious mutations, increasing the rate of adaptation to changing environments, dealing with competition, DNA repair, masking deleterious mutations, and reducing genetic variation on the genomic level. All of these ideas about why sexual reproduction has been maintained are generally supported, but ultimately the size of the population determines if sexual reproduction is entirely beneficial. Larger populations appear to respond more quickly to some of the benefits obtained through sexual reproduction than do smaller population sizes. However, newer models presented in recent years suggest a basic advantage for sexual reproduction in slowly reproducing complex organisms. Sexual reproduction allows these species to exhibit characteristics that depend on the specific environment that they inhabit, and the particular survival strategies that they employ. Sexual selection In order to reproduce sexually, both females and males need to find a mate. Generally in animals mate choice is made by females while males compete to be chosen. This can lead organisms to extreme efforts in order to reproduce, such as combat and display, or produce extreme features caused by a positive feedback known as a Fisherian runaway. Thus sexual reproduction, as a form of natural selection, has an effect on evolution. Sexual dimorphism is where the basic phenotypic traits vary between males and females of the same species. Dimorphism is found in both sex organs and in secondary sex characteristics, body size, physical strength and morphology, biological ornamentation, behavior and other bodily traits. However, sexual dimorphism arises only when sexual selection operates over an extended period of time. Animals have different ways of going about sexual selection. One common example is male peacocks fanning out their tail feathers in order to show all their colors and attract a female mate. Lions with bigger and fuller manes are more likely to attract a female mate. Male deer with larger antlers are more likely to gain a female mate. These are just a few of the many examples in nature that show how sexual selection operates when females are choosing a mate.
The ability to undergo meiosis is widespread among arthropods including both those that reproduce sexually and those that reproduce parthenogenetically. Although meiosis is a major characteristic of arthropods, understanding of its fundamental adaptive benefit has long been regarded as an unresolved problem. Aquatic arthropods may breed by external fertilization, as for example horseshoe crabs do, or by internal fertilization, where the ova remain in the female's body and the sperm must somehow be inserted. All known terrestrial arthropods use internal fertilization. Opiliones (harvestmen), millipedes, and some crustaceans use modified appendages such as gonopods or penises to transfer the sperm directly to the female. However, most male terrestrial arthropods produce spermatophores, waterproof packets of sperm, which the females take into their bodies. A few such species rely on females to find spermatophores that have already been deposited on the ground, but in most cases males only deposit spermatophores when complex courtship rituals look likely to be successful. Most arthropods lay eggs, but scorpions are ovoviviparous: they produce live young after the eggs have hatched inside the mother, and are noted for prolonged maternal care. Newly born arthropods have diverse forms, and insects alone cover the range of extremes. Some hatch as apparently miniature adults (direct development), and in some cases, such as silverfish, the hatchlings do not feed and may be helpless until after their first moult. Many insects (Holometabola) hatch as grubs or caterpillars, which do not have segmented limbs or hardened cuticles, and metamorphose into adult forms by entering an inactive phase in which the larval tissues are broken down and re-used to build the adult body. Dragonfly larvae have the typical cuticles and jointed limbs of arthropods but are flightless water-breathers with extendable jaws. Crustaceans commonly hatch as tiny nauplius larvae that have only three segments and pairs of appendages. Insect species make up more than two-thirds of all extant animal species. Most insect species reproduce sexually, though some species are facultatively parthenogenetic. Many insect species have sexual dimorphism, while in others the sexes look nearly identical. Typically they have two sexes with males producing spermatozoa and females ova. The ova develop into eggs that have a covering called the chorion, which forms before internal fertilization. Insects have very diverse mating and reproductive strategies, most often resulting in the male depositing a spermatophore within the female, which she stores until she is ready for egg fertilization. After fertilization, the formation of a zygote, and varying degrees of development, in many species the eggs are deposited outside the female, while in others they develop further within the female and the young are born live. There are three extant kinds of mammals: monotremes, placentals and marsupials, all with internal fertilization. In placental mammals, offspring are born as juveniles: complete animals with the sex organs present although not reproductively functional. After several months or years, depending on the species, the sex organs develop further to maturity and the animal becomes sexually mature. Most female mammals are only fertile during certain periods of their estrous cycle, at which point they are ready to mate.
For most mammals, males and females exchange sexual partners throughout their adult lives. The vast majority of fish species lay eggs that are then fertilized by the male. Some species lay their eggs on a substrate like a rock or on plants, while others scatter their eggs and the eggs are fertilized as they drift or sink in the water column. Some fish species use internal fertilization and then disperse the developing eggs or give birth to live offspring. Fish that bear live offspring include the guppies and mollies of the genus Poecilia. Fishes that give birth to live young can be ovoviviparous, where the eggs are fertilized within the female and the eggs simply hatch within the female body, or, as in seahorses, the male carries the developing young within a pouch and gives birth to live young. Fishes can also be viviparous, where the female supplies nourishment to the internally growing offspring. Some fish are hermaphrodites, where a single fish is both male and female and can produce eggs and sperm. In hermaphroditic fish, some are male and female at the same time, while other fish are serially hermaphroditic, starting as one sex and changing to the other. In at least one hermaphroditic species, self-fertilization occurs when the eggs and sperm are released together. Internal self-fertilization may occur in some other species. One fish species does not reproduce by sexual reproduction but uses sex to produce offspring; Poecilia formosa is a unisex species that uses a form of parthenogenesis called gynogenesis, where unfertilized eggs develop into embryos that produce female offspring. Poecilia formosa mates with males of other fish species that use internal fertilization; the sperm does not fertilize the eggs but stimulates their growth, and they develop into embryos. Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm. Most reptiles lay amniotic eggs covered with leathery or calcareous shells. Asexual reproduction has been identified in squamates in six families of lizards and one snake. In some species of squamates, a population of females is able to produce a unisexual diploid clone of the mother. Plants Animals have life cycles with a single diploid multicellular phase that produces haploid gametes directly by meiosis. Male gametes are called sperm, and female gametes are called eggs or ova. In animals, fertilization of the ovum by a sperm results in the formation of a diploid zygote that develops by repeated mitotic divisions into a diploid adult. Plants have two multicellular life-cycle phases, resulting in an alternation of generations. Plant zygotes germinate and divide repeatedly by mitosis to produce a diploid multicellular organism known as the sporophyte. The mature sporophyte produces haploid spores by meiosis that germinate and divide by mitosis to form a multicellular gametophyte phase that produces gametes at maturity.
The gametophytes of different groups of plants vary in size. Mosses and other pteridophytic plants may have gametophytes consisting of several million cells, while angiosperms have as few as three cells in each pollen grain. Flowering plants are the dominant plant form on land, and they reproduce either sexually or asexually. Often their most distinctive feature is their reproductive organs, commonly called flowers. The anther produces pollen grains which contain the male gametophytes that produce sperm nuclei. For pollination to occur, pollen grains must attach to the stigma of the female reproductive structure (carpel), where the female gametophytes are located within ovules enclosed within the ovary. After the pollen tube grows through the carpel's style, the sex cell nuclei from the pollen grain migrate into the ovule to fertilize the egg cell and endosperm nuclei within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus two female cells) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The ovary, which produced the female gametophyte(s), then grows into a fruit, which surrounds the seed(s). Plants may either self-pollinate or cross-pollinate. In 2013, flowers dating from the Cretaceous (100 million years before present) were found encased in amber, the oldest evidence of sexual reproduction in a flowering plant. Microscopic images showed tubes growing out of pollen and penetrating the flower's stigma. The pollen was sticky, suggesting it was carried by insects. Ferns produce large diploid sporophytes with rhizomes, roots and leaves. Fertile leaves produce sporangia that contain haploid spores. The spores are released and germinate to produce small, thin gametophytes that are typically heart shaped and green in color. The gametophyte prothalli produce motile sperm in the antheridia and egg cells in the archegonia on the same or different plants. After rains or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia where they fertilize the egg. To promote outcrossing or cross-fertilization, the sperm are released before the eggs are receptive, making it more likely that the sperm will fertilize the eggs of a different thallus. After fertilization, a zygote is formed which grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations. The bryophytes, which include liverworts, hornworts and mosses, reproduce both sexually and vegetatively. They are small plants found growing in moist locations and, like ferns, have motile sperm with flagella and need water to facilitate sexual reproduction. These plants start as a haploid spore that grows into the dominant gametophyte form, which is a multicellular haploid body with leaf-like structures that photosynthesize. Haploid gametes are produced in antheridia (male) and archegonia (female) by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia and swim to them in a film of water and fertilize the egg cells, thus producing a zygote. The zygote divides by mitotic division and grows into a multicellular, diploid sporophyte.
The sporophyte produces spore capsules (sporangia), which are connected by stalks (setae) to the archegonia. The spore capsules produce spores by meiosis and, when ripe, the capsules burst open to release the spores. Bryophytes show considerable variation in their reproductive structures and the above is a basic outline. Also, in some species each plant is one sex (dioicous), while other species produce both sexes on the same plant (monoicous). Fungi Fungi are classified by the methods of sexual reproduction they employ. The outcome of sexual reproduction most often is the production of resting spores that are used to survive inclement times and to spread. There are typically three phases in the sexual reproduction of fungi: plasmogamy, karyogamy and meiosis. The cytoplasms of the two parent cells fuse during plasmogamy and the nuclei fuse during karyogamy. New haploid gametes are formed during meiosis and develop into spores. The adaptive basis for the maintenance of sexual reproduction in the Ascomycota and Basidiomycota (dikaryon) fungi was reviewed by Wallen and Perlin. They concluded that the most plausible reason for maintaining this capability is the benefit of repairing DNA damage, caused by a variety of stresses, through recombination that occurs during meiosis. Bacteria and archaea Three distinct processes in prokaryotes are regarded as similar to eukaryotic sex: bacterial transformation, which involves the incorporation of foreign DNA into the bacterial chromosome; bacterial conjugation, which is a transfer of plasmid DNA between bacteria, but the plasmids are rarely incorporated into the bacterial chromosome; and gene transfer and genetic exchange in archaea. Bacterial transformation involves the recombination of genetic material and its function is mainly associated with DNA repair. Bacterial transformation is a complex process encoded by numerous bacterial genes, and is a bacterial adaptation for DNA transfer. This process occurs naturally in at least 40 bacterial species. For a bacterium to bind, take up, and recombine exogenous DNA into its chromosome, it must enter a special physiological state referred to as competence (see Natural competence). Sexual reproduction in early single-celled eukaryotes may have evolved from bacterial transformation, or from a similar process in archaea (see below). On the other hand, bacterial conjugation is a type of direct transfer of DNA between two bacteria mediated by an external appendage called the conjugation pilus. Bacterial conjugation is controlled by plasmid genes that are adapted for spreading copies of the plasmid between bacteria. The infrequent integration of a plasmid into a host bacterial chromosome, and the subsequent transfer of a part of the host chromosome to another cell, do not appear to be bacterial adaptations. Exposure of hyperthermophilic archaeal Sulfolobus species to DNA damaging conditions induces cellular aggregation accompanied by high-frequency genetic marker exchange. Ajon et al. hypothesized that this cellular aggregation enhances species-specific DNA repair by homologous recombination. DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the more well-studied bacterial transformation systems that also involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEDavies1998-55] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out Jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured on the twenty editions of the book documented alone for the 15th century. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forwarding of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Language (LA), Narrative Strategy (NS), Target (TA), Situation (SI), Logical Mechanism (LM) and Script Opposition (SO). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
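To make the concatenated KR label described earlier in this section more concrete, here is a minimal, purely illustrative sketch in Python. The GTVHLabel class, its field values and its similarity count are hypothetical conveniences invented for this example and are not part of Raskin and Attardo's work; only the six KR names follow the text above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    """Hypothetical record of one joke's six Knowledge Resources (KRs)."""
    language: str                      # LA
    narrative_strategy: str            # NS
    target: Optional[str]              # TA (may be empty)
    situation: str                     # SI
    logical_mechanism: Optional[str]   # LM (may be empty)
    script_opposition: str             # SO

    def similarity(self, other: "GTVHLabel") -> int:
        """Count how many KRs two jokes share (a crude measure of closeness)."""
        pairs = zip(
            (self.language, self.narrative_strategy, self.target,
             self.situation, self.logical_mechanism, self.script_opposition),
            (other.language, other.narrative_strategy, other.target,
             other.situation, other.logical_mechanism, other.script_opposition),
        )
        return sum(a == b for a, b in pairs)

# Example: two light-bulb riddles share SI, NS, LA and SO but differ in TA and LM.
a = GTVHLabel("neutral wording", "riddle", "Ostfriesen",
              "changing a light bulb", "figure-ground reversal", "dumb/smart")
b = GTVHLabel("neutral wording", "riddle", "blondes",
              "changing a light bulb", None, "dumb/smart")
print(a.similarity(b))  # -> 4
```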
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading
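As a concrete illustration of the template-driven punning programs described above, the following minimal sketch fills a fixed riddle frame from a small list of pre-defined pun entries. The template and word list are invented for this example; like the early systems it caricatures, the program has no semantic scripts and no understanding of the words it combines.

```python
# Minimal sketch of a template-driven punning program of the kind described
# above; the riddle template and word list are invented for this example.
# The program only fills slots; it has no knowledge of what the words mean.
import random

# Pre-defined punning options: (answer word, two senses the word plays on).
PUN_LEXICON = [
    ("bank", "a place that keeps money", "the edge of a river"),
    ("bark", "the sound a dog makes", "the outer layer of a tree"),
    ("pitcher", "a container for pouring water", "a player who throws a baseball"),
]

TEMPLATE = "What do you call {sense_a} that is also {sense_b}? A {word}."

def make_pun():
    """Fill the fixed riddle template with one randomly chosen pun entry."""
    word, sense_a, sense_b = random.choice(PUN_LEXICON)
    return TEMPLATE.format(word=word, sense_a=sense_a, sense_b=sense_b)

print(make_pun())
```

Everything such a program "knows" is hand-coded in its lexicon, which is precisely the limitation that the ontological semantic approach mentioned above is meant to overcome.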
========================================
[SOURCE: https://en.wikipedia.org/wiki/Outflow_channels] | [TOKENS: 1044]
Contents Outflow channels Outflow channels are extremely long, wide swathes of scoured ground on Mars. They extend many hundreds of kilometers in length and are typically greater than one kilometer in width. They are thought to have been carved by huge outburst floods. Crater counts indicate that most of the channels were cut since the early Hesperian, though the age of the features is variable between different regions of Mars. Some outflow channels in the Amazonis and Elysium Planitiae regions have yielded ages of only tens of millions of years, extremely young by the standards of Martian topographic features. The largest, Kasei Vallis, is around 3,500 km (2,200 mi) long, greater than 400 km (250 mi) wide and exceeds 2.5 km (1.6 mi) in depth cut into the surrounding plains. The outflow channels contrast with the Martian channel features known as "valley networks", which much more closely resemble the dendritic planform more typical of terrestrial river drainage basins. Outflow channels tend to be named after the names for Mars in various ancient world languages, or more rarely for major terrestrial rivers. The term outflow channels was introduced in planetology in 1975. Formation On the basis of their geomorphology, locations and sources, the channels are today generally thought to have been carved by outburst floods (huge, rare, episodic floods of liquid water), although some authors have made the case for formation by the action of glaciers, lava, or debris flows. Calculations indicate that the volumes of water required to cut such channels at least equal and most likely exceed by several orders of magnitude the present discharges of the largest terrestrial rivers, and are probably comparable to the largest floods known to have ever occurred on Earth (e.g., those that cut the Channeled Scablands in North America or those released during the re-flooding of the Mediterranean basin at the end of the Messinian Salinity Crisis). Such exceptional flow rates and the implied associated volumes of water released could not be sourced by precipitation but rather demand the release of water from some long-term store, probably a subsurface aquifer sealed by ice and subsequently breached by meteorite impact or igneous activity. List of outflow channels by region This is a partial list of named channel structures on Mars claimed as outflow channels in the literature, largely following The Surface of Mars by Carr. The channels tend to cluster in certain regions on the Martian surface, often associated with volcanic provinces, and the list reflects this. Originating structures at the head of the channels, if clear and named, are noted in parentheses and in italics after each entry. Chryse Planitia is a roughly circular volcanic plain east of the Tharsis bulge and its associated volcanic systems. This region contains the most prominent and numerous outflow channels on Mars. The channels flow east or north into the plain. In this region it is particularly difficult to distinguish outflow channels from lava channels but the following features have been suggested as at least overprinted by outflow channel floods: Several channels flow either onto the plains of Amazonis and Elysium from the southern highlands, or originate at graben within the plains. This region contains some of the youngest channels. Several outflow channels rise in the region west of the Elysium volcanic province and flow northwestward to the Utopia Planitia. 
As is common in the Amazonis and Elysium Planitiae regions, these channels tend to originate in graben. Some of these channels may be influenced by lahars, as indicated by their surface textures and by ridged, lobate deposits at their margins and termini. The valleys of Hephaestus Fossae and Hebrus Valles are of extremely unusual form and, although sometimes claimed as outflow channels, are of enigmatic origin. Three valleys flow from east of the Hellas basin's rim down onto the basin floor. It has been argued that Uzboi, Ladon, Margaritifer and Ares Valles, although now separated by large craters, once comprised a single outflow channel flowing north into Chryse Planitia. The source of this outflow has been suggested as overflow from the Argyre crater, formerly filled to the brim as a lake by channels (Surius, Dzigai, and Palacopus Valles) draining down from the south pole. If real, the full length of this drainage system would be over 8000 km, the longest known drainage path in the Solar System. Under this suggestion, the extant form of the outflow channel Ares Vallis would thus be a remolding of a pre-existing structure. The large troughs present at each pole, Chasma Boreale and Chasma Australe, have both been argued to have been formed by meltwater release from beneath polar ice, as in a terrestrial jökulhlaup. However, others have argued for an eolian origin, with the troughs carved by katabatic winds blowing down from the poles. See also Further reading References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Complex_network] | [TOKENS: 1568]
Contents Complex network In the context of network theory, a complex network is a graph (network) with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in networks representing real systems. The study of complex networks is a young and active area of scientific research (since 2000) inspired largely by empirical findings of real-world networks such as computer networks, biological networks, technological networks, brain networks, climate networks and social networks. Definition Most social, biological, and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of directed networks these features also include reciprocity, triad significance profile and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. The most complex structures can be realized by networks with a medium number of interactions. This corresponds to the fact that the maximum information content (entropy) is obtained for medium probabilities. Two well-known and much studied classes of complex networks are scale-free networks and small-world networks, whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features—power-law degree distributions for the former and short path lengths and high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structures have attracted attention as well. The field continues to develop at a brisk pace, and has brought together researchers from many areas including mathematics, physics, electric power systems, biology, climate, computer science, sociology, epidemiology, and others. Ideas and tools from network science and engineering have been applied to the analysis of metabolic and genetic regulatory networks; the study of ecosystem stability and robustness; clinical science; the modeling and design of scalable communication networks such as the generation and visualization of complex wireless networks; and a broad range of other practical issues. Network science is the topic of many conferences in a variety of different fields, and has been the subject of numerous books both for the lay person and for the expert. Scale-free networks A network is called scale-free if its degree distribution, i.e., the probability that a node selected uniformly at random has a certain number of links (degree), follows a mathematical function called a power law. The power law implies that the degree distribution of these networks has no characteristic scale. In contrast, networks with a single well-defined scale are somewhat similar to a lattice in that every node has (roughly) the same degree. Examples of networks with a single scale include the Erdős–Rényi (ER) random graph, random regular graphs, regular lattices, and hypercubes. Some models of growing networks that produce scale-invariant degree distributions are the Barabási–Albert model and the fitness model. 
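A rough sketch of the contrast between a single-scale and a heavy-tailed degree distribution is given below, using the networkx library (assumed available); the graph sizes and parameters are arbitrary choices for illustration, not values from any particular study.

```python
# Rough sketch (parameters chosen arbitrarily): contrast the degree statistics
# of an Erdos-Renyi random graph with a Barabasi-Albert preferential-attachment
# graph of roughly the same size and density, using the networkx library.
import networkx as nx

n = 10_000
er = nx.gnp_random_graph(n, p=6 / n, seed=42)    # single-scale random graph
ba = nx.barabasi_albert_graph(n, m=3, seed=42)   # growth + preferential attachment

for name, g in [("Erdos-Renyi", er), ("Barabasi-Albert", ba)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name:16s} mean degree {sum(degrees) / n:5.2f}  max degree {max(degrees)}")

# The ER maximum degree stays close to the mean (Poisson-like tail), while the
# BA maximum degree is far larger: the heavy-tailed "hubs" discussed below.
```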
In a network with a scale-free degree distribution, some vertices have a degree that is orders of magnitude larger than the average; these vertices are often called "hubs", although this language is misleading as, by definition, there is no inherent threshold above which a node can be viewed as a hub. If there were such a threshold, the network would not be scale-free. Interest in scale-free networks began in the late 1990s with the reporting of discoveries of power-law degree distributions in real world networks such as the World Wide Web, the network of Autonomous systems (ASs), some networks of Internet routers, protein interaction networks, email networks, etc. Most of these reported "power laws" fail when challenged with rigorous statistical testing, but the more general idea of heavy-tailed degree distributions (which many of these networks do genuinely exhibit, before finite-size effects occur) is very different from what one would expect if edges existed independently and at random (i.e., if they followed a Poisson distribution). There are many different ways to build a network with a power-law degree distribution. The Yule process is a canonical generative process for power laws, and has been known since 1925. However, it is known by many other names due to its frequent reinvention, e.g., the Gibrat principle by Herbert A. Simon, the Matthew effect, cumulative advantage, and preferential attachment by Barabási and Albert for power-law degree distributions. Recently, Hyperbolic Geometric Graphs have been suggested as yet another way of constructing scale-free networks. Some networks with a power-law degree distribution (and specific other types of structure) can be highly resistant to the random deletion of vertices, i.e., the vast majority of vertices remain connected together in a giant component. Such networks can also be quite sensitive to targeted attacks aimed at fracturing the network quickly. When the graph is uniformly random except for the degree distribution, these critical vertices are the ones with the highest degree, and have thus been implicated in the spread of disease (natural and artificial) in social and communication networks, and in the spread of fads (both of which are modeled by a percolation or branching process). While random graphs (ER) have an average distance of order log N between nodes, where N is the number of nodes, scale-free graphs can have a distance of order log log N. Small-world networks A network is called a small-world network by analogy with the small-world phenomenon (popularly known as six degrees of separation). The small-world hypothesis, first described by the Hungarian writer Frigyes Karinthy in 1929 and tested experimentally by Stanley Milgram (1967), is the idea that two arbitrary people are connected by only six degrees of separation, i.e. the diameter of the corresponding graph of social connections is not much larger than six. In 1998, Duncan J. Watts and Steven Strogatz published the first small-world network model, which through a single parameter smoothly interpolates between a random graph and a lattice. Their model demonstrated that with the addition of only a small number of long-range links, a regular graph, in which the diameter is proportional to the size of the network, can be transformed into a "small world" in which the average number of edges between any two vertices is very small (mathematically, it should grow as the logarithm of the size of the network), while the clustering coefficient stays large.
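The interpolation Watts and Strogatz described can be sketched directly with networkx's implementation of their model; the network size, neighbourhood size and rewiring probabilities below are illustrative choices, not values taken from their paper.

```python
# Rough sketch of the Watts-Strogatz interpolation, using networkx; n, k and
# the rewiring probabilities below are arbitrary illustrative choices.
import networkx as nx

n, k = 1_000, 10   # nodes, and neighbours per node in the starting ring lattice
for p in (0.0, 0.01, 0.1, 1.0):
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    print(f"p={p:<4}  clustering={nx.average_clustering(g):.3f}  "
          f"avg path length={nx.average_shortest_path_length(g):.2f}")

# Expected pattern: at p=0 both clustering and path length are high; by p=0.01
# the path length has already collapsed while clustering remains close to the
# lattice value -- the small-world regime described above.
```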
It is known that a wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Further, real world networks such as the World Wide Web and the metabolic network also exhibit this property. In the scientific literature on networks, there is some ambiguity associated with the term "small world". In addition to referring to the size of the diameter of the network, it can also refer to the co-occurrence of a small diameter and a high clustering coefficient. The clustering coefficient is a metric that represents the density of triangles in the network. For instance, sparse random graphs have a vanishingly small clustering coefficient while real world networks often have a significantly larger coefficient. Scientists point to this difference as suggesting that edges are correlated in real world networks. Approaches have been developed to generate network models that exhibit high correlations, while preserving the desired degree distribution and small-world properties. These approaches can be used to generate analytically solvable toy models for research into these systems. Spatial networks Many real networks are embedded in space. Examples include transportation and other infrastructure networks and brain networks. Several models for spatial networks have been developed. See also Books References
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-59] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the area of artificial intelligence (AI), social media and technology that is a wholly owned subsidiary of American aerospace company SpaceX. The company was founded by Elon Musk in 2023; its flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. For Chief Engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
On November 26, 2025, Elon Musk announced his plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. The Southern Environmental Law Center has stated the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. In June 2024, the Greater Memphis Chamber announced xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer became fully operational in December 2024. The local government in Memphis has voiced concerns regarding the increased electricity usage of 150 megawatts at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily supplement the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. xAI has continually expanded its infrastructure, with the purchase of a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models underlies the expansion. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organised xAI into four primary development teams, one for the Grok app and others for features such as Grok Imagine, while Grokipedia, X and API features fell under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot is out of beta, it will only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300/mo. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Plausible_deniability] | [TOKENS: 2725]
Contents Plausible deniability Plausible deniability is a social tactic that allows people to deny knowledge, participation, or an active role in carrying out an activity, relaying a loaded message, etc. The deniability exists due to a lack of incriminating evidence or, more commonly, to multiple plausible interpretations of the available evidence. Plausible deniability is a prime shield against accountability and forms the basis of many of the covert attacks that occur within human social behavior. In a chain of command, senior officials can deny knowledge or responsibility for actions committed by or on behalf of members of their organizational hierarchy. They may do so because of a lack of evidence that can confirm their participation, even if they were personally involved in or at least willfully ignorant of the actions. If illegal or otherwise disreputable and unpopular activities become public, high-ranking officials may deny any awareness of such acts to insulate themselves and shift the blame onto the agents who carried out the acts, as they are confident that their doubters will be unable to prove otherwise. The lack of evidence to the contrary ostensibly makes the denial plausible (credible), but sometimes it merely renders any accusations unactionable. The term typically implies forethought, such as intentionally setting up the conditions for the plausible avoidance of responsibility for one's future actions or knowledge. In some organizations, legal doctrines such as command responsibility exist to hold major parties responsible for the actions of subordinates involved in such acts and to nullify any legal protection that their denial of involvement would carry. In politics and especially espionage, deniability refers to the ability of a powerful player or intelligence agency to pass the buck and to avoid blowback by secretly arranging for an action to be taken on its behalf by a third party that is ostensibly unconnected with the major player. It allows politicians to avoid being directly associated with negative campaigning, and enables them to denounce or disavow third-party smear campaigns that use unethical approaches or potentially libelous innuendo against their political opponents. Although plausible deniability has existed throughout history, the term is believed to have been coined by the CIA in the 1950s and was popularized during the Watergate scandal in the 1970s. Overview Arguably, the key concept of plausible deniability is plausibility. It is relatively easy for a government official to issue a blanket denial of an action, and it is possible to destroy or cover up evidence after the fact; that might be sufficient to avoid a criminal prosecution, for instance. However, the public might well disbelieve the denial, particularly if there is strong circumstantial evidence or if the action is believed to be so unlikely that the only logical explanation is that the denial is false.[citation needed] The concept is even more important in espionage. Intelligence may come from many sources, including human sources. The exposure of information to which only a few people are privileged may directly implicate some of the people in the disclosure. An example is if an official is traveling secretly, and only one aide knows the specific travel plans.
If that official is assassinated during his travels, and the circumstances of the assassination strongly suggest that the assassin had foreknowledge of the official's travel plans, the probable conclusion is that his aide has betrayed the official. There may be no direct evidence linking the aide to the assassin, but collaboration can be inferred from the facts alone, thus making the aide's denial implausible. History The term's roots go back to US President Harry Truman's National Security Council Paper 10/2 of June 18, 1948, which defined "covert operations" as "all activities (except as noted herein) which are conducted or sponsored by this Government against hostile foreign states or groups or in support of friendly foreign states or groups but which are so planned and executed that any US Government responsibility for them is not evident to unauthorized persons and that if uncovered the US Government can plausibly disclaim any responsibility for them." During the Eisenhower administration, NSC 10/2 was incorporated into the more-specific NSC 5412/2 "Covert Operations." NSC 5412 was declassified in 1977 and is located at the National Archives. The expression "plausibly deniable" was first used publicly by Central Intelligence Agency (CIA) Director Allen Dulles. The idea, on the other hand, is considerably older. For example, in the 19th century, Charles Babbage described the importance of having "a few simply honest men" on a committee who could be temporarily removed from the deliberations when "a peculiarly delicate question arises" so that one of them could "declare truly, if necessary, that he never was present at any meeting at which even a questionable course had been proposed." The Church Committee of the U.S. Senate conducted an investigation of the intelligence agencies in 1974–1975. In the course of the investigation, it was revealed that the CIA, going back to the Kennedy administration, had plotted the assassination of a number of foreign leaders, including Cuba's Fidel Castro, but the president himself, who clearly supported such actions, was not to be directly involved so that he could deny knowledge of it. That was given the term "plausible denial." Non-attribution to the United States for covert operations was the original and principal purpose of the so-called doctrine of "plausible denial." Evidence before the Committee clearly demonstrates that this concept, designed to protect the United States and its operatives from the consequences of disclosures, has been expanded to mask decisions of the president and his senior staff members. — Church Committee Plausible denial involves the creation of power structures and chains of command loose and informal enough to be denied if necessary. The idea was that the CIA and later other bodies could be given controversial instructions by powerful figures, including the president himself, but that the existence and true source of those instructions could be denied if necessary if, for example, an operation went disastrously wrong and it was necessary for the administration to disclaim responsibility. The Hughes–Ryan Act of 1974 sought to put an end to plausible denial by requiring a presidential finding that each operation is important to national security, and the Intelligence Oversight Act of 1980 required Congress to be notified of all covert operations.
Both laws, however, are full of enough vague terms and escape hatches to allow the executive branch to thwart their authors' intentions, as was shown by the Iran–Contra affair. Indeed, the members of Congress are in a dilemma since when they are informed, they are in no position to stop the action, unless they leak its existence and thereby foreclose the option of covertness. The (Church Committee) conceded that to provide the United States with "plausible denial" in the event that the anti-Castro plots were discovered, Presidential authorization might have been subsequently "obscured". (The Church Committee) also declared that, whatever the extent of the knowledge, Presidents Eisenhower, Kennedy and Johnson should bear the "ultimate responsibility" for the actions of their subordinates. — John M. Crewdson, The New York Times CIA officials deliberately used Aesopian language in talking to the President and others outside the agency. (Richard Helms) testified that he did not want to "embarrass a President" or sit around an official table talking about "killing or murdering." The report found this "circumlocution" reprehensible, saying: "Failing to call dirty business by its rightful name may have increased the risk of dirty business being done." The committee also suggested that the system of command and control may have been deliberately ambiguous, to give Presidents a chance for "plausible denial." — Anthony Lewis, The New York Times What made the responsibility difficult to pin down in retrospect was a sophisticated system of institutionalized vagueness and circumlocution whereby no official - and particularly a President - had to officially endorse questionable activities. Unsavory orders were rarely committed to paper and what record the committee found was shot through with references to "removal," "the magic button" and "the resort beyond the last resort." Thus the agency might at times have misread instructions from on high, but it seemed more often to be easing the burden of presidents who knew there were things they didn't want to know. As former CIA director Richard Helms told the committee: "The difficulty with this kind of thing, as you gentlemen are all painfully aware, is that nobody wants to embarrass a President of the United States." — Newsweek In his testimony to the congressional committee studying the Iran–Contra affair, Vice Admiral John Poindexter stated: "I made a deliberate decision not to ask the President, so that I could insulate him from the decision and provide some future deniability for the President if it ever leaked out." In the 1980s, the Soviet KGB ran OPERATION INFEKTION (also called "OPERATION DENVER"), which utilised the East German Stasi and Soviet-affiliated press to spread the idea that HIV/AIDS was an engineered bioweapon. The Stasi acquired plausible deniability on the operation by covertly supporting biologist Jakob Segal, whose stories were picked up by international press, including "numerous bourgeois newspapers" such as the Sunday Express. Publications in third-party countries were then cited as the originators of the claims. Meanwhile, Soviet intelligence obtained plausible deniability by utilising the German Stasi in the disinformation operation. In 2014, "Little green men"—troops without insignia carrying modern Russian military equipment—emerged at the start of the Russo-Ukrainian War, which The Moscow Times described as a tactic of plausible deniability. 
The Wagner Group, a Russian private military company, has been described as an attempt at plausible deniability for Kremlin-backed interventions in Ukraine, Syria and various parts of Africa. Flaws Other examples Plausible deniability is a core tactic of sexual escalation; both parties initiate with deniable language, which escalates into deniable touch and which, if not rejected, paves the way to further escalation that can result in a sexual encounter. At any point in an interaction, both parties can deny accountability for their actions and walk away saving face. Deniability here enables a pretense of ignorance of the activities taking place: a fundamental human behavior that enables human interaction in sensitive domains. ...the U.S. government may at times require a certain deniability. Private activities can provide that deniability. — Council on Foreign Relations, Finding America's Voice: A Strategy for Reinvigorating U.S. Public Diplomacy[page needed] In computer networks, plausible deniability often refers to a situation in which people can deny transmitting a file, even when it is proven to come from their computer. That is sometimes done by setting the computer to relay certain types of broadcasts automatically in such a way that the original transmitter of a file is indistinguishable from those who are merely relaying it. In that way, those who first transmitted the file can claim that their computer had merely relayed it from elsewhere. This principle is used in the opentracker bittorrent implementation by including random IP addresses in peer lists. In encrypted messaging protocols, such as bitmessage, every user on the network keeps a copy of every message but is only able to decrypt their own, and that can only be done by trying to decrypt every single message. Using this approach, it is impossible to determine who sent a message to whom without being able to decrypt it, since everyone receives everything and the outcome of the decryption process is kept private. It can also be done by a VPN if the host is not known.[dubious – discuss] In any case, that claim cannot be disproven without a complete decrypted log of all network connections. The Freenet file sharing network applies the same idea by obfuscating data sources and flows in order to protect operators and users of the network, preventing them and, by extension, observers such as censors from knowing where data comes from and where it is stored. In cryptography, deniable encryption may be used to describe steganographic techniques in which the very existence of an encrypted file or message is deniable in the sense that an adversary cannot prove that an encrypted message exists. In that case, the system is said to be "fully undetectable".[citation needed] Some systems take this further, such as MaruTukku, FreeOTFE and (to a much lesser extent) TrueCrypt and VeraCrypt, which nest encrypted data. The owner of the encrypted data may reveal one or more keys to decrypt certain information from it, and then deny that more keys exist, a statement which cannot be disproven without knowledge of all encryption keys involved. The existence of "hidden" data within the overtly encrypted data is then deniable in the sense that it cannot be proven to exist. "Trepidation of Relationship" and "Trepidation of Memory" are two further cryptographic concepts used to discuss plausible deniability, as compared in a YouTube audio podcast.
These cryptographic concepts serve to protect privacy and increase security in networks. They make mass surveillance more difficult and enable plausible deniability. Both concepts can be summarized as follows: The Underhanded C Contest was an annual programming contest involving the creation of carefully crafted defects, which have to be both very hard to find and plausibly deniable as mistakes once found. See also References Further reading External links
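A toy sketch, loosely inspired by the peer-list technique mentioned above for the opentracker BitTorrent software, is given below: it pads a list of genuine peers with randomly generated addresses so that, from the list alone, a genuine peer cannot be told apart from a decoy. This is an illustration of the idea only, not opentracker's actual code, and the addresses and function names are invented.

```python
# Toy illustration only (not opentracker's actual code): pad a peer list with
# randomly generated IPv4 addresses so that a genuine peer is indistinguishable,
# from the list alone, from an address that was never involved at all.
import random

def pad_peer_list(real_peers, n_fake, seed=None):
    """Return a shuffled peer list containing the real peers plus random decoys."""
    rng = random.Random(seed)
    fakes = [".".join(str(rng.randint(1, 254)) for _ in range(4))
             for _ in range(n_fake)]
    padded = list(real_peers) + fakes
    rng.shuffle(padded)
    return padded

# A hypothetical swarm: two real peers hidden among eight decoys.
print(pad_peer_list(["203.0.113.5", "198.51.100.7"], n_fake=8, seed=0))
```

Any single address appearing in such a list can then be plausibly claimed to be one of the random fillers, which is exactly the deniability property at issue.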
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEHirschBarrick1980-56] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable-type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both the lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of using jokes and cartoons as page fillers was also widespread in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline of the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, moving from general to topical to explicitly sexual humour, signalled openness on the part of the waitress to a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It provides no way to classify a text according to more than one element at a time, while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
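The cross-listing problem just described, where a single narrative ends up filed under several motif headings at once, can be made concrete with a toy index. The motif labels and joke identifiers below are invented purely for illustration and do not correspond to actual Thompson Motif Index entries.

```python
from collections import defaultdict

# A toy motif index: each joke is filed under every motif it contains,
# so the same text appears under an actor, an item and an incident heading.
jokes = {
    "J1": {"actor: farmer", "item: lightbulb", "incident: absurd task"},
    "J2": {"actor: elephant", "incident: absurd task"},
}

index = defaultdict(set)
for joke_id, motifs in jokes.items():
    for motif in motifs:
        index[motif].add(joke_id)

# "J1" is retrievable under three different headings -- exactly the
# ordering/finding ambiguity the text describes.
for motif, ids in sorted(index.items()):
    print(motif, "->", sorted(ids))
```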
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are the script opposition (SO), the logical mechanism (LM), the situation (SI), the target (TA), the narrative strategy (NS) and the language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
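The six-KR label described above lends itself naturally to a small data-structure sketch. The following is only an illustration, not Raskin and Attardo's own formalism: the field names, the example values and the naive similarity measure are assumptions made for this example.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class GTVHLabel:
    """A joke described by the six GTVH Knowledge Resources.
    TA and LM are optional, mirroring the caveat that they may be empty."""
    language: str                      # LA: wording, register, puns
    narrative_strategy: str            # NS: riddle, Q&A, one-liner, ...
    target: Optional[str]              # TA: the butt of the joke, may be None
    situation: str                     # SI: props, activities, setting
    logical_mechanism: Optional[str]   # LM: faulty logic, role reversal, ...
    script_opposition: str             # SO: the two opposed scripts

def similarity(a: GTVHLabel, b: GTVHLabel) -> float:
    """Crude similarity: the fraction of KRs on which two jokes agree."""
    da, db = asdict(a), asdict(b)
    return sum(da[k] == db[k] for k in da) / len(da)

lightbulb = GTVHLabel("plain English", "riddle", "an ethnic group",
                      "changing a light bulb", "faulty reasoning", "smart/dumb")
elephant = GTVHLabel("plain English", "riddle", None,
                     "an elephant in an odd place", "faulty reasoning",
                     "possible/impossible")
print(f"Shared KRs: {similarity(lightbulb, elephant):.2f}")
```

Jokes with more shared KR values would receive a higher score, which is the intuition behind using the concatenated label to judge how alike two jokes are.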
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading
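To give a sense of how limited such template-driven systems are, here is a minimal sketch in the spirit of the early punning programs described above. It is not any actual published system; the single template and the stored pun pairs are invented for the example, and the program has no semantics at all, it only recombines pre-defined strings.

```python
import random

# A single riddle template with a finite set of pre-defined punning slots.
PUN_PAIRS = [
    # (setup phrase, punchline, informal gloss of the wordplay)
    ("a fish with no eyes", "fsh", "drop the 'i's (eyes) from 'fish'"),
    ("a boomerang that doesn't come back", "a stick",
     "it loses its defining property"),
]

def make_riddle() -> str:
    """Fill the one known template with one stored pair; nothing is generated."""
    setup, punchline, _gloss = random.choice(PUN_PAIRS)
    return f"Q: What do you call {setup}?\nA: {punchline.capitalize()}."

if __name__ == "__main__":
    print(make_riddle())
```

The contrast with the encyclopaedic scripts demanded by the SSTH/GTVH is plain: everything this program "knows" is listed explicitly in its template and word list.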
========================================
[SOURCE: https://en.wikipedia.org/wiki/Amazonian_(Mars)] | [TOKENS: 904]
Contents Amazonian (Mars) The Amazonian is a geologic system and time period on the planet Mars characterized by low rates of meteorite and asteroid impacts and by cold, hyperarid conditions broadly similar to those on Mars today. The transition from the preceding Hesperian period is somewhat poorly defined. The Amazonian is thought to have begun around 3 billion years ago, although error bars on this date are extremely large (~500 million years). The period is sometimes subdivided into the Early, Middle, and Late Amazonian. The Amazonian continues to the present day. The Amazonian period has been dominated by impact crater formation and aeolian processes, with ongoing isolated volcanism occurring in the Tharsis region and Cerberus Fossae, including signs of activity as recently as tens of thousands of years ago in the latter and within the past few million years on Olympus Mons, implying that these volcanic centres may still be capable of activity but are dormant at present. Description and name origin The Amazonian System and Period is named after Amazonis Planitia, which has a sparse crater density over a wide area. Such densities are representative of many Amazonian-aged surfaces. The type area of the Amazonian System is in the Amazonis quadrangle (MC-8) around 15°N 158°W. Amazonian chronology and stratigraphy Because it is the youngest of the Martian periods, the chronology of the Amazonian is comparatively well understood through traditional geological laws of superposition coupled with the relative dating technique of crater counting. The scarcity of craters characteristic of the Amazonian also means that, unlike in the older periods, fine-scale (<100 m) surface features are preserved. This enables detailed, process-orientated study of many Amazonian-age surface features of Mars, as the necessary details of surface form are still visible. Furthermore, the relative youth of this period means that over the past few hundred million years it remains possible to reconstruct the statistics of the orbital mechanics of the Sun, Mars, and Jupiter without the patterns being overwhelmed by chaotic effects, and from this to reconstruct the variation of solar insolation – the amount of heat from the sun – reaching Mars through time. Climatic variations have been shown to occur in cycles not dissimilar in magnitude and duration to terrestrial Milankovitch cycles. Together, these features – good preservation, and an understanding of the imposed solar flux – mean that much research on the Amazonian of Mars has focussed on understanding its climate, and the surface processes that respond to the climate. This has included: Good preservation has also enabled detailed studies of other geological processes on Amazonian Mars, notably volcanic processes, brittle tectonics, and cratering processes. System and Period are not interchangeable terms in formal stratigraphic nomenclature, although they are frequently confused in popular literature. A system is an idealized stratigraphic column based on the physical rock record of a type area (type section) correlated with rock sections from many different locations planetwide. A system is bound above and below by strata with distinctly different characteristics (on Earth, usually index fossils) that indicate dramatic (often abrupt) changes in the dominant fauna or environmental conditions. (See Cretaceous–Paleogene boundary as example.) At any location, rock sections in a given system are apt to contain gaps (unconformities) analogous to missing pages from a book. 
In some places, rocks from the system are absent entirely due to nondeposition or later erosion. For example, rocks of the Cretaceous System are absent throughout much of the eastern central interior of the United States. However, the time interval of the Cretaceous (Cretaceous Period) still occurred there. Thus, a geologic period represents the time interval over which the strata of a system were deposited, including any unknown amounts of time present in gaps. Periods are measured in years, determined by radioactive dating. On Mars, radiometric ages are not available except from Martian meteorites whose provenance and stratigraphic context are unknown. Instead, absolute ages on Mars are determined by impact crater density, which is heavily dependent upon models of crater formation over time. Accordingly, the beginning and end dates for Martian periods are uncertain, especially for the Hesperian/Amazonian boundary, which may be in error by a factor of 2 or 3. Images See also Notes and references Bibliography and recommended reading
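The crater-counting approach described in this article can be sketched numerically. The snippet below only computes a cumulative crater density; the diameters, the counting area and the threshold used to call a surface "young" are invented placeholder values, not a real Martian chronology model (real work relies on calibrated isochrons and crater-production functions).

```python
# Minimal sketch: cumulative crater density N(>=D) per square kilometre.
crater_diameters_km = [0.9, 1.2, 1.4, 2.1, 3.3, 5.0]   # made-up example counts
count_area_km2 = 250_000.0                               # made-up mapping area

def cumulative_density(diameters, area_km2, d_min_km):
    """Number of craters with diameter >= d_min_km, normalised per km^2."""
    return sum(1 for d in diameters if d >= d_min_km) / area_km2

n1 = cumulative_density(crater_diameters_km, count_area_km2, d_min_km=1.0)
print(f"N(>=1 km) = {n1:.2e} craters/km^2")

# Purely illustrative cutoff: sparsely cratered (young) surfaces such as
# Amazonis Planitia have low N(>=1 km); the threshold here is arbitrary.
print("relatively young, Amazonian-like surface" if n1 < 1e-4 else "older surface")
```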
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hyperbolic_functions] | [TOKENS: 8674]
Contents Hyperbolic functions In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and –sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and sinh(t) respectively. Hyperbolic functions are used to express the angle of parallelism in hyperbolic geometry. They are used to express Lorentz boosts as hyperbolic rotations in special relativity. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equation is important in many areas of physics, including electromagnetic theory, heat transfer, and fluid dynamics. The basic hyperbolic functions are the hyperbolic sine (sinh) and the hyperbolic cosine (cosh), from which are derived the hyperbolic tangent (tanh), cotangent (coth), secant (sech) and cosecant (csch), corresponding to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine (arsinh), the area hyperbolic cosine (arcosh), and so on. The hyperbolic functions take an argument called a hyperbolic angle. The magnitude of a hyperbolic angle is the area of its hyperbolic sector on the hyperbola xy = 1. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector. In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane. By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument. History The first known calculation of a hyperbolic trigonometry problem is attributed to Gerardus Mercator when issuing the Mercator map projection circa 1566; it requires tabulating solutions to a transcendental equation involving hyperbolic functions. The first to suggest a similarity between the sector of the circle and that of the hyperbola was Isaac Newton in his 1687 Principia Mathematica. Roger Cotes suggested modifying the trigonometric functions using the imaginary unit $i = \sqrt{-1}$ to obtain an oblate spheroid from a prolate one. Hyperbolic functions were formally introduced in 1757 by Vincenzo Riccati. Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions. As early as 1759, Daviet de Foncenex showed the interchangeability of the trigonometric and hyperbolic functions using the imaginary unit and extended de Moivre's formula to hyperbolic functions. During the 1760s, Johann Heinrich Lambert systematized the use of these functions and provided exponential expressions in various publications. Lambert credited Riccati for the terminology and names of the functions, but altered the abbreviations to those used today. Notation Definitions With hyperbolic angle u, the hyperbolic functions sinh and cosh can be defined with the exponential function $e^{u}$. In the figure, $A = (e^{-u}, e^{u})$, $B = (e^{u}, e^{-u})$, and $OA + OB = OC$. 
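Since sinh and cosh are defined here through the exponential function, the definitions can be checked numerically in a few lines. The sketch below (plain Python, standard library only) recomputes both functions from exp and confirms that (cosh u, sinh u) lies on the unit hyperbola; the sample arguments are arbitrary.

```python
import math

def sinh(x: float) -> float:
    # Hyperbolic sine from its exponential definition.
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x: float) -> float:
    # Hyperbolic cosine from its exponential definition.
    return (math.exp(x) + math.exp(-x)) / 2

for u in (0.0, 0.5, 2.0):
    # (cosh u, sinh u) lies on the right branch of x^2 - y^2 = 1.
    assert math.isclose(cosh(u) ** 2 - sinh(u) ** 2, 1.0, rel_tol=1e-12)
    # Agreement with the library implementations.
    assert math.isclose(sinh(u), math.sinh(u))
    assert math.isclose(cosh(u), math.cosh(u))
print("exponential definitions check out")
```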
The hyperbolic functions may be defined as solutions of differential equations: the hyperbolic sine and cosine are the solution (s, c) of the system
\[ c'(x) = s(x), \qquad s'(x) = c(x), \]
with the initial conditions $s(0) = 0$, $c(0) = 1$. The initial conditions make the solution unique; without them any pair of functions $(a e^{x} + b e^{-x},\; a e^{x} - b e^{-x})$ would be a solution. sinh(x) and cosh(x) are also the unique solution of the equation $f''(x) = f(x)$, such that $f(0) = 1$, $f'(0) = 0$ for the hyperbolic cosine, and $f(0) = 0$, $f'(0) = 1$ for the hyperbolic sine. Hyperbolic functions may also be deduced from trigonometric functions with complex arguments, where i is the imaginary unit with $i^{2} = -1$. The above definitions are related to the exponential definitions via Euler's formula (see § Hyperbolic functions for complex numbers below). Characterizing properties It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval:
\[ \text{area} = \int_a^b \cosh x \, dx = \int_a^b \sqrt{1 + \left(\tfrac{d}{dx}\cosh x\right)^{2}}\, dx = \text{arc length}. \]
The hyperbolic tangent is the (unique) solution to the differential equation $f' = 1 - f^{2}$, with $f(0) = 0$. Useful relations The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule (named after George Osborn) states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for $\theta$, $2\theta$, $3\theta$ or $\theta$ and $\varphi$ into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs. Odd and even functions:
\[ \sinh(-x) = -\sinh x, \quad \cosh(-x) = \cosh x, \quad \tanh(-x) = -\tanh x, \]
\[ \coth(-x) = -\coth x, \quad \operatorname{sech}(-x) = \operatorname{sech} x, \quad \operatorname{csch}(-x) = -\operatorname{csch} x. \]
Reciprocal arguments of the inverse functions:
\[ \operatorname{arsech} x = \operatorname{arcosh}\!\left(\tfrac{1}{x}\right), \quad \operatorname{arcsch} x = \operatorname{arsinh}\!\left(\tfrac{1}{x}\right), \quad \operatorname{arcoth} x = \operatorname{artanh}\!\left(\tfrac{1}{x}\right). \]
Analogous to Euler's formula:
\[ \cosh x + \sinh x = e^{x}, \qquad \cosh x - \sinh x = e^{-x}. \]
Analogous to the Pythagorean trigonometric identity:
\[ \cosh^{2} x - \sinh^{2} x = 1, \qquad 1 - \tanh^{2} x = \operatorname{sech}^{2} x, \qquad \coth^{2} x - 1 = \operatorname{csch}^{2} x. \]
Sums and differences of arguments:
\[ \sinh(x+y) = \sinh x \cosh y + \cosh x \sinh y, \qquad \sinh(x-y) = \sinh x \cosh y - \cosh x \sinh y, \]
\[ \cosh(x+y) = \cosh x \cosh y + \sinh x \sinh y, \qquad \cosh(x-y) = \cosh x \cosh y - \sinh x \sinh y, \]
\[ \tanh(x+y) = \frac{\tanh x + \tanh y}{1 + \tanh x \tanh y}, \qquad \tanh(x-y) = \frac{\tanh x - \tanh y}{1 - \tanh x \tanh y}, \]
particularly
\[ \cosh(2x) = \sinh^{2} x + \cosh^{2} x = 2\sinh^{2} x + 1 = 2\cosh^{2} x - 1, \]
\[ \sinh(2x) = 2\sinh x \cosh x, \qquad \tanh(2x) = \frac{2\tanh x}{1 + \tanh^{2} x}. \]
Sums of functions to products:
\[ \sinh x + \sinh y = 2\sinh\!\left(\frac{x+y}{2}\right)\cosh\!\left(\frac{x-y}{2}\right), \qquad \cosh x + \cosh y = 2\cosh\!\left(\frac{x+y}{2}\right)\cosh\!\left(\frac{x-y}{2}\right), \]
\[ \sinh x - \sinh y = 2\cosh\!\left(\frac{x+y}{2}\right)\sinh\!\left(\frac{x-y}{2}\right), \qquad \cosh x - \cosh y = 2\sinh\!\left(\frac{x+y}{2}\right)\sinh\!\left(\frac{x-y}{2}\right). \]
Products to sums:
\[ \cosh x \cosh y = \tfrac{1}{2}\bigl(\cosh(x+y) + \cosh(x-y)\bigr), \qquad \sinh x \sinh y = \tfrac{1}{2}\bigl(\cosh(x+y) - \cosh(x-y)\bigr), \]
\[ \sinh x \cosh y = \tfrac{1}{2}\bigl(\sinh(x+y) + \sinh(x-y)\bigr), \qquad \cosh x \sinh y = \tfrac{1}{2}\bigl(\sinh(x+y) - \sinh(x-y)\bigr). \]
Half-argument formulas:
\[ \sinh\!\left(\frac{x}{2}\right) = \frac{\sinh x}{\sqrt{2(\cosh x + 1)}} = \operatorname{sgn} x \,\sqrt{\frac{\cosh x - 1}{2}}, \qquad \cosh\!\left(\frac{x}{2}\right) = \sqrt{\frac{\cosh x + 1}{2}}, \]
\[ \tanh\!\left(\frac{x}{2}\right) = \frac{\sinh x}{\cosh x + 1} = \operatorname{sgn} x \,\sqrt{\frac{\cosh x - 1}{\cosh x + 1}} = \frac{e^{x} - 1}{e^{x} + 1}, \]
where sgn is the sign function. If x ≠ 0, then
\[ \tanh\!\left(\frac{x}{2}\right) = \frac{\cosh x - 1}{\sinh x} = \coth x - \operatorname{csch} x. \]
When $t = \tanh\left(\tfrac{x}{2}\right)$,
\[ \sinh x = \frac{2t}{1-t^{2}}, \quad \cosh x = \frac{1+t^{2}}{1-t^{2}}, \quad \tanh x = \frac{2t}{1+t^{2}}, \quad \coth x = \frac{1+t^{2}}{2t}, \quad \operatorname{sech} x = \frac{1-t^{2}}{1+t^{2}}, \quad \operatorname{csch} x = \frac{1-t^{2}}{2t}. \]
\[ \sinh^{2} x = \tfrac{1}{2}(\cosh 2x - 1), \qquad \cosh^{2} x = \tfrac{1}{2}(\cosh 2x + 1). \]
The following inequality is useful in statistics:
\[ \cosh(t) \leq e^{t^{2}/2}. \]
It can be proved by comparing the Taylor series of the two functions term by term. Inverse functions as logarithms
\[ \operatorname{arsinh}(x) = \ln\!\left(x + \sqrt{x^{2} + 1}\right) \]
\[ \operatorname{arcosh}(x) = \ln\!\left(x + \sqrt{x^{2} - 1}\right), \qquad x \geq 1 \]
\[ \operatorname{artanh}(x) = \tfrac{1}{2}\ln\!\left(\frac{1+x}{1-x}\right), \qquad |x| < 1 \]
\[ \operatorname{arcoth}(x) = \tfrac{1}{2}\ln\!\left(\frac{x+1}{x-1}\right), \qquad |x| > 1 \]
\[ \operatorname{arsech}(x) = \ln\!\left(\frac{1}{x} + \sqrt{\frac{1}{x^{2}} - 1}\right) = \ln\!\left(\frac{1 + \sqrt{1 - x^{2}}}{x}\right), \qquad 0 < x \leq 1 \]
\[ \operatorname{arcsch}(x) = \ln\!\left(\frac{1}{x} + \sqrt{\frac{1}{x^{2}} + 1}\right), \qquad x \neq 0 \]
Derivatives
\[ \frac{d}{dx}\sinh x = \cosh x, \qquad \frac{d}{dx}\cosh x = \sinh x, \]
\[ \frac{d}{dx}\tanh x = 1 - \tanh^{2} x = \operatorname{sech}^{2} x = \frac{1}{\cosh^{2} x}, \]
\[ \frac{d}{dx}\coth x = 1 - \coth^{2} x = -\operatorname{csch}^{2} x = -\frac{1}{\sinh^{2} x}, \qquad x \neq 0, \]
\[ \frac{d}{dx}\operatorname{sech} x = -\tanh x \operatorname{sech} x, \qquad \frac{d}{dx}\operatorname{csch} x = -\coth x \operatorname{csch} x \quad (x \neq 0), \]
\[ \frac{d}{dx}\operatorname{arsinh} x = \frac{1}{\sqrt{x^{2}+1}}, \qquad \frac{d}{dx}\operatorname{arcosh} x = \frac{1}{\sqrt{x^{2}-1}} \quad (1 < x), \]
\[ \frac{d}{dx}\operatorname{artanh} x = \frac{1}{1-x^{2}} \quad (|x| < 1), \qquad \frac{d}{dx}\operatorname{arcoth} x = \frac{1}{1-x^{2}} \quad (1 < |x|), \]
\[ \frac{d}{dx}\operatorname{arsech} x = -\frac{1}{x\sqrt{1-x^{2}}} \quad (0 < x < 1), \qquad \frac{d}{dx}\operatorname{arcsch} x = -\frac{1}{|x|\sqrt{1+x^{2}}} \quad (x \neq 0). \]
Second derivatives Each of the functions sinh and cosh is equal to its second derivative, that is:
\[ \frac{d^{2}}{dx^{2}}\sinh x = \sinh x, \qquad \frac{d^{2}}{dx^{2}}\cosh x = \cosh x. \]
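Both the logarithmic forms of the inverse functions and the derivative relations above are easy to check numerically. A brief sketch using only the standard library follows; the finite-difference step size and the sample points are arbitrary choices.

```python
import math

def arsinh(x: float) -> float:
    # Logarithmic form of the inverse hyperbolic sine.
    return math.log(x + math.sqrt(x * x + 1))

def artanh(x: float) -> float:
    # Logarithmic form; valid for |x| < 1.
    return 0.5 * math.log((1 + x) / (1 - x))

# Agreement with the standard-library inverses.
assert math.isclose(arsinh(2.0), math.asinh(2.0))
assert math.isclose(artanh(0.5), math.atanh(0.5))

# Central finite differences for the derivative relations.
h = 1e-5
def d(f, x):   # first derivative
    return (f(x + h) - f(x - h)) / (2 * h)
def d2(f, x):  # second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

assert math.isclose(d(math.sinh, 0.7), math.cosh(0.7), rel_tol=1e-8)
assert math.isclose(d(arsinh, 2.0), 1 / math.sqrt(5), rel_tol=1e-8)
# sinh equals its own second derivative (the property used just below).
assert math.isclose(d2(math.sinh, 0.7), math.sinh(0.7), rel_tol=1e-4)
print("inverse-function and derivative identities verified")
```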
All functions with this property are linear combinations of sinh and cosh, in particular the exponential functions $e^{x}$ and $e^{-x}$. Standard integrals
\[ \int \sinh(ax)\,dx = a^{-1}\cosh(ax) + C, \qquad \int \cosh(ax)\,dx = a^{-1}\sinh(ax) + C, \]
\[ \int \tanh(ax)\,dx = a^{-1}\ln(\cosh(ax)) + C, \qquad \int \coth(ax)\,dx = a^{-1}\ln\left|\sinh(ax)\right| + C, \]
\[ \int \operatorname{sech}(ax)\,dx = a^{-1}\arctan(\sinh(ax)) + C, \]
\[ \int \operatorname{csch}(ax)\,dx = a^{-1}\ln\left|\tanh\!\left(\tfrac{ax}{2}\right)\right| + C = a^{-1}\ln\left|\coth(ax) - \operatorname{csch}(ax)\right| + C = -a^{-1}\operatorname{arcoth}(\cosh(ax)) + C. \]
The following integrals can be proved using hyperbolic substitution:
\[ \int \frac{du}{\sqrt{a^{2}+u^{2}}} = \operatorname{arsinh}\!\left(\frac{u}{a}\right) + C, \qquad \int \frac{du}{\sqrt{u^{2}-a^{2}}} = \operatorname{sgn} u \,\operatorname{arcosh}\left|\frac{u}{a}\right| + C, \]
\[ \int \frac{du}{a^{2}-u^{2}} = a^{-1}\operatorname{artanh}\!\left(\frac{u}{a}\right) + C \quad (u^{2} < a^{2}), \qquad \int \frac{du}{a^{2}-u^{2}} = a^{-1}\operatorname{arcoth}\!\left(\frac{u}{a}\right) + C \quad (u^{2} > a^{2}), \]
\[ \int \frac{du}{u\sqrt{a^{2}-u^{2}}} = -a^{-1}\operatorname{arsech}\left|\frac{u}{a}\right| + C, \qquad \int \frac{du}{u\sqrt{a^{2}+u^{2}}} = -a^{-1}\operatorname{arcsch}\left|\frac{u}{a}\right| + C, \]
where C is the constant of integration. Taylor series expressions It is possible to express explicitly the Taylor series at zero (or the Laurent series, if the function is not defined at zero) of the above functions.
\[ \sinh x = x + \frac{x^{3}}{3!} + \frac{x^{5}}{5!} + \frac{x^{7}}{7!} + \cdots = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!} \]
This series is convergent for every complex value of x. Since the function sinh x is odd, only odd exponents for x occur in its Taylor series.
\[ \cosh x = 1 + \frac{x^{2}}{2!} + \frac{x^{4}}{4!} + \frac{x^{6}}{6!} + \cdots = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} \]
This series is convergent for every complex value of x. Since the function cosh x is even, only even exponents for x occur in its Taylor series. The sum of the sinh and cosh series is the infinite series expression of the exponential function. The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function. 
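Because the Taylor series for sinh and cosh converge for every argument, a truncated partial sum already reproduces the functions to machine precision for moderate x. A small illustrative check (the number of terms is an arbitrary choice):

```python
import math

def sinh_series(x: float, terms: int = 15) -> float:
    # sum over n of x^(2n+1) / (2n+1)!
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cosh_series(x: float, terms: int = 15) -> float:
    # sum over n of x^(2n) / (2n)!
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

for x in (0.1, 1.0, 3.0):
    assert math.isclose(sinh_series(x), math.sinh(x), rel_tol=1e-12)
    assert math.isclose(cosh_series(x), math.cosh(x), rel_tol=1e-12)
    # The two series together sum to the exponential series.
    assert math.isclose(sinh_series(x) + cosh_series(x), math.exp(x), rel_tol=1e-12)
print("truncated Taylor series match math.sinh/math.cosh")
```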
\[ \tanh x = x - \frac{x^{3}}{3} + \frac{2x^{5}}{15} - \frac{17x^{7}}{315} + \cdots = \sum_{n=1}^{\infty} \frac{2^{2n}(2^{2n}-1)B_{2n}\,x^{2n-1}}{(2n)!}, \qquad |x| < \frac{\pi}{2} \]
\[ \coth x = x^{-1} + \frac{x}{3} - \frac{x^{3}}{45} + \frac{2x^{5}}{945} + \cdots = \sum_{n=0}^{\infty} \frac{2^{2n}B_{2n}\,x^{2n-1}}{(2n)!}, \qquad 0 < |x| < \pi \]
\[ \operatorname{sech} x = 1 - \frac{x^{2}}{2} + \frac{5x^{4}}{24} - \frac{61x^{6}}{720} + \cdots = \sum_{n=0}^{\infty} \frac{E_{2n}\,x^{2n}}{(2n)!}, \qquad |x| < \frac{\pi}{2} \]
\[ \operatorname{csch} x = x^{-1} - \frac{x}{6} + \frac{7x^{3}}{360} - \frac{31x^{5}}{15120} + \cdots = \sum_{n=0}^{\infty} \frac{2(1-2^{2n-1})B_{2n}\,x^{2n-1}}{(2n)!}, \qquad 0 < |x| < \pi \]
where $B_{2n}$ denotes the $2n$th Bernoulli number and $E_{2n}$ the $2n$th Euler number. Infinite products and continued fractions The following expansions are valid in the whole complex plane: Comparison with circular functions The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle. Since the area of a circular sector with radius r and angle u (in radians) is $r^{2}u/2$, it will be equal to u when $r = \sqrt{2}$. In the diagram, such a circle is tangent to the hyperbola xy = 1 at (1, 1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude. The legs of the two right triangles with hypotenuse on the ray defining the angles are of length $\sqrt{2}$ times the circular and hyperbolic functions. The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the function $a\cosh(x/a)$ is the catenary, the curve formed by a uniform flexible chain hanging freely between two fixed points under uniform gravity. Relationship to the exponential function The decomposition of the exponential function into its even and odd parts gives the identities
\[ e^{x} = \cosh x + \sinh x, \qquad e^{-x} = \cosh x - \sinh x. \]
Combined with Euler's formula $e^{ix} = \cos x + i\sin x$, this gives
\[ e^{x+iy} = (\cosh x + \sinh x)(\cos y + i\sin y) \]
for the general complex exponential function. Additionally,
\[ e^{x} = \sqrt{\frac{1+\tanh x}{1-\tanh x}} = \frac{1+\tanh\frac{x}{2}}{1-\tanh\frac{x}{2}}. \]
Hyperbolic functions for complex numbers Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. 
The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
\[ e^{ix} = \cos x + i\sin x, \qquad e^{-ix} = \cos x - i\sin x, \]
so:
\[ \cosh(ix) = \tfrac{1}{2}\left(e^{ix} + e^{-ix}\right) = \cos x, \qquad \sinh(ix) = \tfrac{1}{2}\left(e^{ix} - e^{-ix}\right) = i\sin x, \qquad \tanh(ix) = i\tan x, \]
\[ \cosh(x+iy) = \cosh(x)\cos(y) + i\sinh(x)\sin(y), \qquad \sinh(x+iy) = \sinh(x)\cos(y) + i\cosh(x)\sin(y), \]
\[ \tanh(x+iy) = \frac{\tanh(x) + i\tan(y)}{1 + i\tanh(x)\tan(y)}, \]
\[ \cosh x = \cos(ix), \qquad \sinh x = -i\sin(ix), \qquad \tanh x = -i\tan(ix). \]
Thus, hyperbolic functions are periodic with respect to the imaginary component, with period $2\pi i$ ($\pi i$ for hyperbolic tangent and cotangent). See also References External links
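The relations to the circular functions and the imaginary period stated above can be confirmed with Python's standard cmath module; the sample value of z is arbitrary.

```python
import cmath

z = 0.7 + 0.3j

# cosh(iz) = cos(z) and sinh(iz) = i*sin(z), as derived from Euler's formula.
assert cmath.isclose(cmath.cosh(1j * z), cmath.cos(z))
assert cmath.isclose(cmath.sinh(1j * z), 1j * cmath.sin(z))

# Periodicity in the imaginary direction: period 2*pi*i for sinh and cosh,
# pi*i for tanh.
assert cmath.isclose(cmath.sinh(z + 2j * cmath.pi), cmath.sinh(z))
assert cmath.isclose(cmath.cosh(z + 2j * cmath.pi), cmath.cosh(z))
assert cmath.isclose(cmath.tanh(z + 1j * cmath.pi), cmath.tanh(z))
print("complex-argument identities hold")
```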
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python-Ogre] | [TOKENS: 560]
Contents Python-Ogre Python-Ogre is a Python binding for the OGRE 3D engine, designed to provide the functionality and performance of OGRE (written in C++) with the accessibility and ease of use of Python, in order to facilitate the rapid development of 3D games and to make the OGRE engine more accessible to beginners, who might otherwise be daunted by the technicalities of writing in native C++. The performance of the engine is decreased in comparison to the original C++ demos; however, the original OGRE engine provides such high performance that the performance of Python-Ogre is still more than acceptable for all but the most graphics-intensive games. Features Python-Ogre differs from the Ogre3D engine it is based upon in that it comes pre-bundled with Python bindings and demos for many other support libraries. Python-Ogre has compatibility with all platforms supported by OGRE: Support The Python-Ogre wiki contains build instructions for Windows, Linux, and Mac OS X platforms, as well as tutorials and example code snippets. Ogre3D hosts the official Python-Ogre forum for helping developers in their use of the engine. History The PyOgre project began in early 2005, when a Python binding for OGRE was first attempted using Boost.Python from the Boost C++ Libraries by two members of the Ogre3D community, Clay Culver and Federico Di Gregorio. This effort ultimately failed, which prompted the use of SWIG as the basis for the C++ binding. This method proved to be rather successful, providing the community with a somewhat limited and error-prone implementation, but an implementation nonetheless. In mid-2006, Lakin Wecker began work on Python-Ogre, based on the Boost.Python libraries, as had been attempted before. This was developed alongside the PyOgre project. He was aided by Andy Miller, who later took over development of the project with assistance from Roman Yakovenko, Joseph Lisee, and Ben Harling during the evolution of the engine. Development of PyOgre was halted in mid-2007, and officially succeeded by Python-Ogre. As of the summer of 2008, Andy Miller was actively working on adding new features to Python-Ogre, as well as providing support and maintenance. As of January 2014, the main website at python-ogre.org has gone offline, but wiki.python-ogre.org is still extant. Included libraries The following libraries are either currently supported, or have at one point in time worked with the Python-Ogre engine. Support for particular libraries is noted in each release. Demos are available for all libraries listed; however, not all demos function, due to the constantly evolving codebase and limited number of active developers. References External links
========================================