[SOURCE: https://en.wikipedia.org/wiki/Judaism_and_politics]
Judaism and politics

The relationship between Judaism and politics is a historically complex subject that has evolved over time, concurrently with both changes within Jewish society and religious practice and changes in the general society of the places where Jewish people live. In particular, Jewish political thought can be split into four major eras: Biblical (prior to Roman rule), Rabbinic (from roughly 100 BCE to 600 CE), Medieval (from roughly 600 CE to 1800 CE), and Modern (the 18th century to the present day). Several different political models are described across its canon, usually composed of some combination of tribal federation, monarchy, priestly theocracy, and rule by prophets. Political organization during the Rabbinic and Medieval eras generally involved semi-autonomous rule by Jewish councils and courts (with council membership often composed purely of rabbis) that would govern the community and act as representatives to secular authorities outside the Jewish community. Beginning in the 19th century, and coinciding with the expansion of the political rights accorded to individual Jews in European society, Jews affiliated with and contributed theory to a wide range of political movements and philosophies.

Biblical models

Stuart Cohen has pointed out that there are three separate power centers depicted in the Hebrew Bible: the priesthood, the royal throne, and the prophets. One model of biblical politics is that of the tribal federation, where power is shared among different tribes and institutions. Another is the model of limited constitutional monarchy. The Hebrew Bible contains a complex chronicle of the Kings of Israel and Judah.
Some passages of the Hebrew Bible contain intimate portrayals of the inner workings of the royal households of Saul, David, and Solomon; the accounts of subsequent monarchs are frequently more distanced and less detailed, and often begin with the judgement that the monarch "did evil in the sight of the Lord". Daniel Elazar has argued that the concept of covenant is the fundamental concept in the biblical political tradition and in the later Jewish thought that emerges from the Bible. Outside of the Hebrew Bible, the ancient Jewish scribe, sage, and allegorist Ben Sira stated, "A work is praised for the skill of the artisan; so a people's leader is proved wise by his words. The loud of mouth are feared in their city, and the one who is reckless in speech is hated." This was followed by, "A wise magistrate educates his people, and the rule of an intelligent person is well ordered. As the people's judge is, so are his officials; as the ruler of the city is, so are all its inhabitants. An undisciplined king ruins his people, but a city becomes fit to live in through the understanding of its rulers," implying that the political leader's intelligence reflects that of his people. This can be seen as an early example of Jewish political philosophy.

Rabbinic period

In Roman Judea, Jewish communities were governed by rabbinical courts known as the Sanhedrin. Lesser Sanhedrins composed of 23 judges were appointed in each city, while a Great Sanhedrin of 71 judges was the highest authority, taking cases appealed from the lower courts. The Sanhedrin served as the leadership of the Jewish community under Roman rule, acting as emissary to the imperial authorities in addition to overseeing religious practice and collecting taxes. The Great Sanhedrin was the highest Jewish governing body of the Second Temple period.
A statement by Judah bar Ilai in the Babylonian Talmud (Sanhedrin 20b) depicts monarchy as the ideal form of Jewish governance, following the Book of Deuteronomy's statement that, "When you come into the land that the Lord your God is about to give you, and you take hold of it and dwell in it, and you say, 'Let me put a king over me like all the nations that are around me', you shall surely put over you a king whom the Lord your God chooses..." (Deut. 17:14–15). But the Talmud also brings a different interpretation of this verse from Rabbi Nehorai (identified in some traditions with Eleazar ben Arach), who is quoted as explaining that, "This section was spoken only in anticipation of their future murmurings, as it is written, and you say, Let me put a king over me..." (Sanhedrin 20b). In many interpretations, Rabbi Nehorai does not regard the appointment of a king as a strict obligation, but as a concession to later "murmurings" from Israel. In addition to imagining ideal forms of governance, the rabbis accepted a principle of obedience to the government currently in power. The Talmud makes reference to the principle of dina de-malkhuta dina ("the law of the land is law"), which recognizes non-Jewish laws and non-Jewish legal jurisdiction as binding on Jewish citizens, provided that they are not contrary to any laws of Judaism.

Medieval period

During the Middle Ages, some Ashkenazi Jewish communities were governed by a qahal. The qahal had regulatory control over Jewish communities in a given region; it administered commerce, hygiene, sanitation, charity, Jewish education, kashrut, and relations between landlords and their tenants. It provided a number of community facilities, such as a rabbi, a ritual bath, and an interest-free loan facility for the Jewish community. The qahal even had sufficient authority that it could arrange for individuals to be expelled from synagogues, excommunicating them. Some medieval political theorists, such as Maimonides and Nissim of Gerona, saw kingship as the ideal form of government.
Maimonides viewed the commandment in Deuteronomy to appoint a king as a clear positive ideal, following the Talmudic teaching that "three commandments were given to Israel when they entered the land: to appoint a king, as it says, 'You shall surely put over you a king'..." A large section of Maimonides' legal code, the Mishneh Torah, titled "The Laws of Kings and their Wars", deals with the ideal model of kingship, especially in the messianic era, and with ruling over non-Jewish subjects through the Noachide laws. Other sections of the Mishneh Torah (mostly also in the Book of Judges, where the laws of kingship are found) are dedicated to the laws relating to legislators and judges. Whereas Maimonides idealized kingship, other medieval political theorists, such as Abravanel, saw kingship as misguided. Later, Jewish philosophers such as Baruch Spinoza would lay the groundwork for the Enlightenment, arguing for ideas such as the separation of church and state. Spinoza's writings caused him to be excommunicated from the Jewish community of Amsterdam, although his work and legacy have been largely rehabilitated, especially among secular Jews in the 20th and 21st centuries.

Modern period

With Jewish emancipation, the institution of the qahal as an autonomous entity was officially abolished. Jews increasingly became participants in the wider political and social sphere of larger nations. As Jews became citizens of states with various political systems, and argued about whether to found their own state, Jewish ideas of the relationship between Judaism and politics developed in many different directions. In the nineteenth century and early twentieth century, when there was a large Jewish population in Europe, some Jews favored various forms of liberalism and saw them as connected with Jewish principles. Some Jews allied themselves with a range of Jewish political movements.
These included Socialist and Bundist movements favored by the Jewish left, Zionist movements, Jewish Autonomist movements, Territorialist movements, and Jewish Anarchist movements. Haredi Jews formed an organization known as World Agudath Israel, which espoused Haredi Jewish political principles. In the 21st century, shifts are occurring. The Jewish community in Great Britain, one of the largest in the Jewish diaspora, is leaning conservative, as a poll published by the Jewish Chronicle in early 2015 shows. Of British Jews polled, 69% would vote for the Conservative Party, while 22% would vote for the Labour Party. This is in stark contrast to the rest of the voter population, which, according to a BBC poll, had the Conservatives and Labour almost tied at about a third each. Jews have typically been part of the British middle class, the traditional home of the Conservative Party, although the number of Jews in working-class communities of London is in decline. The main voting bloc of poorer Jews in Britain now, made up primarily of the ultra-Orthodox, votes en masse for the Conservatives. Attitudes toward Israel influence the vote of three out of four British Jews. A shift toward conservatism has also been exhibited in France, where about half of the Jewish population is Sephardic. Jérôme Fourquet, director of the "Public opinion and corporate strategy" department at the polling organization IFOP, notes that there is a "pronounced preference" for right-wing politics among French Jews. During the 2007 election, Jews (Orthodox or not) represented the strongest pillar of support for Sarkozy after observant Catholics.

During the American Civil War, Jews were divided in their views of slavery and abolition. Prior to 1861, there were virtually no rabbinical sermons on slavery. The silence on this issue was probably a result of fear that the controversy would create conflict within the Jewish community. Some Jews owned slaves or traded them.
Most Southern Jews supported slavery, and few Northern Jews were abolitionists; most sought peace and remained silent on the subject of slavery. America's largest Jewish community, New York's Jews, were "overwhelmingly pro-southern, pro-slavery, and anti-Lincoln in the early years of the war". Eventually, however, they began to lean politically toward Abraham Lincoln's Republican party and emancipation. The Swedish-born rabbi Morris Jacob Raphall was one of the most vocal Jewish supporters of the institution of slavery. Mordecai Manuel Noah was initially against the expansion of slavery, but later became an opponent of emancipation. Isaac Mayer Wise followed a policy of silence on the issue when the war broke out; Wise was a supporter of the Democratic Party, which was pro-slavery at that time. Ernestine Rose was one Jewish opponent of slavery, as was Bernhard Felsenthal. Moses Mielziner opposed slavery on Jewish religious grounds, arguing that Mosaic law maintained a compassionate view toward the slave. Rabbi David Einhorn also invoked Jewish values against slavery. Rose and Einhorn were threatened with tarring and feathering. While earlier Jewish immigrants tended to be politically conservative, the wave of Eastern European Jews that began arriving in the early 1880s was generally more liberal or left-wing, and became the political majority. For most of the 20th century since 1936, the vast majority of Jews in the United States have been aligned with the Democratic Party. Supporters of the Jewish left have argued that left-wing values vis-à-vis social justice can be traced to Jewish religious texts, including the Tanakh and later texts, which include a strong endorsement of hospitality to "the stranger" and the principle of redistribution of wealth, as well as a tradition of challenging authority, as exemplified by the biblical prophets.
Towards the end of the 20th century and the beginning of the 21st century, Republicans mounted a concerted effort to win the Jewish vote away from the Democrats. While a solid majority of American Jews continues to be aligned with the Democratic Party, many have argued that there is increased Jewish support for political conservatism.

The development of a political system in Israel drew largely on European models of governance, rather than on models from the Jewish political tradition. Some political figures in Israel, however, have seen their principles as based in Judaism. This is especially pronounced in political parties that see themselves as religious parties, such as Shas, United Torah Judaism, and The Jewish Home. Politics in Israel are dominated by Zionist parties, which traditionally fall into three camps, the first two being the largest: Labor Zionism, Revisionist Zionism, and Religious Zionism. There are also several non-Zionist Orthodox religious parties, non-Zionist secular left-wing groups, as well as non-Zionist and anti-Zionist Israeli Arab parties. Recent interest in developing political theory grounded in Jewish sources has been spurred on by the activities of the neo-conservative Shalem Center.
One example of a well-known Jew in Australian politics is Josh Frydenberg, a member of the centre-right, conservative Liberal Party, who served as Treasurer until 2022 and was, until being unseated, the member for Kooyong, a wealthy Melbourne electorate. Currently, there are four Jews in the Australian Parliament, all in the House of Representatives: Mark Dreyfus (the Labor member for Isaacs in Victoria since 2007), Mike Freelander (the Labor member for Macarthur in New South Wales since 2016), Julian Leeser (the Liberal member for Berowra in New South Wales since 2016), and Josh Burns (the Labor member for Macnamara in Victoria since 2019). Many Australian Jews have been hostile to the progressive Australian Greens party due to its perceived support for the Boycott, Divestment and Sanctions (BDS) movement, a pro-Palestinian political movement opposed by both major parties (the Liberal Party and the Labor Party). There are currently three Jews in the state parliaments of Australia: one in New South Wales (Ron Hoenig, the Labor member for the electoral district of Heffron since 2012) and two in Victoria (David Southwick, the Liberal member for Caulfield since 2010; and Paul Hamer, the Labor member for Box Hill since 2018).
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTESkaggs199528-161]
PlayStation (console)

The PlayStation (codenamed PSX, abbreviated as PS, and retroactively known as the PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after its release and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges.
The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware.
Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to the music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station.
At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted this joint research, but decided to turn what they had developed with Nintendo and Sega into a console based on the SNES. Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to the company's involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, the project still faced opposition from a majority of those present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and with Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.
According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay. Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles.
Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America.
Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of a condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo.
Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour their own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues forming outside shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million within six months, although the Saturn outsold it in the first few weeks owing to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers paying up to £700 per console. Before the North American release, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. 
At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson summoned Steve Race to the conference stage, who simply said "$299" and walked off to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available by the time of the PlayStation's American launch, compared to the Saturn's six launch games. The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. 
Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high-street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games sold to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.1 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One form) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also did well in markets where it was never officially released. In Brazil, a third party's registration of the trademark meant the console could not be released officially; the officially distributed Sega Saturn initially dominated the market, but after Sega's withdrawal, PlayStation imports and widespread piracy grew. In China, the Saturn was likewise the most popular 32-bit console at first, but after Sega left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release it. 
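As a back-of-the-envelope illustration (Python used purely for the arithmetic, not part of the source), the four-to-one attach rate combined with the 800,000 US consoles quoted above implies the following software volume; this is an inference from the stated figures, not a reported number:

```python
# Software volume implied by Sony's four-to-one attach rate.
# Input figures are taken from the text above; the product is illustrative.
consoles_sold_us_1995 = 800_000   # US consoles sold Sept-Dec 1995
attach_rate = 4                   # games sold per console, per Sony
implied_games_sold = consoles_sold_us_1995 * attach_rate
print(f"{implied_games_sold:,}")  # 3,200,000
```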
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with stylised advertising slogans in which the controller's geometric symbols replaced letters, such as "Live in Your World. Play in Ours." and "U R Not E" (with a red "E", read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. Let me show you how ready I am.'" As the console's appeal widened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. 
Sony partnered with prominent nightclubs such as Ministry of Sound, and with festival promoters, to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. 
Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance of the industry. Sony still held 60% of the overall North American video game market share at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the millennium: in June 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3.
Hardware
The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and supports music sequencing. 
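The clock figure quoted above is not arbitrary: 33.8688 MHz is an exact multiple of the 44.1 kHz CD-audio sampling rate also mentioned, a relationship common in CD-based hardware. A quick arithmetic check (Python used purely for illustration):

```python
# The CPU clock divides evenly by the CD-audio sampling rate.
cpu_clock_hz = 33_868_800   # 33.8688 MHz, as quoted above
cd_audio_hz = 44_100        # 44.1 kHz CD-audio sampling rate
assert cpu_clock_hz % cd_audio_hz == 0
print(cpu_clock_hz // cd_audio_hz)  # 768
```

That is, the CPU runs at exactly 768 times the audio sampling rate, which simplifies deriving the audio clock from the master clock.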
It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation supports a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate up to 4,000 sprites and render 180,000 texture-mapped polygons per second, or 360,000 polygons per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw further connectors removed, with the final version retaining only a single serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. 
The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software, including C compilers, needed to program PlayStation games and applications. On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. 
The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese version, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used in instances where simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking in the sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two (dual) vibration motors (shock). Unlike its predecessor, the DualShock features analogue sticks with textured rubber grips, longer handles, and slightly different shoulder buttons, and includes rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models can also play CD Audio. 
The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. 
To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive (Tiger H/E assembly), prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, the drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, are prone to skipping during full-motion video and to physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects in the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. 
The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
Game library
The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! 
heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk, and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum ranges. By the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-ins of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it.
Reception
The PlayStation was mostly well received upon release. 
Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo. Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5, each editor's highest score for any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price of its games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.
Legacy
SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. 
Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, whose video game division came to contribute roughly 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. 
Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June 2025, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. 
Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get them onto the market quickly, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers; part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
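The per-unit arithmetic implied above can be sketched as follows. Only the roughly 40% retail-price gap comes from the text; every dollar figure and the retailer-margin parameter are hypothetical assumptions for illustration, not documented Sony or Nintendo numbers.

```python
# Illustrative sketch of CD-ROM vs. cartridge publishing economics.
# The ~40% retail gap is from the text above; the dollar amounts and
# the retailer margin are hypothetical assumptions.

def publisher_net(retail_price, media_cost, retailer_margin=0.30):
    """Publisher's take per unit: wholesale revenue minus media cost."""
    return retail_price * (1 - retailer_margin) - media_cost

cartridge = publisher_net(retail_price=70.0, media_cost=25.0)  # assumed figures
cd_rom = publisher_net(retail_price=42.0, media_cost=2.0)      # ~40% cheaper at retail

# Despite the much lower retail price, the publisher's net per unit stays
# comparable, consistent with the "same amount of net revenue" claim above.
print(f"cartridge: ${cartridge:.2f}/unit, CD-ROM: ${cd_rom:.2f}/unit")
```

With these assumed inputs, the cheap-to-press disc absorbs the retail price cut, which is the mechanism the paragraph describes.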
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavorably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_ref-38] | [TOKENS: 4993]
Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations and was among the 48 constellations listed by the 2nd-century AD astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. 
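The right-ascension limits quoted above can be converted to degrees with the standard rule that 24 hours of right ascension span 360°, i.e. 1 hour = 15°. A minimal sketch (the helper name is mine):

```python
# Convert the IAU boundary right ascensions quoted above into degrees.
# 24 h of right ascension = 360 deg, so 1 h = 15 deg and 1 m = 0.25 deg.

def ra_to_degrees(hours, minutes):
    """Right ascension given as (hours, minutes) -> degrees of arc."""
    return (hours + minutes / 60.0) * 15.0

west_edge = ra_to_degrees(4, 43.3)  # 04h 43.3m
east_edge = ra_to_degrees(6, 25.5)  # 06h 25.5m
print(f"Orion's RA range: {west_edge} deg to {east_edge} deg")
```

This places the constellation's borders at roughly 70.8° to 96.4° of right ascension.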
However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, though only its brightest ones) are then visible at twilight for a few hours around local noon, in the brightest section of the sky, low in the north where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), are also points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star, Meissa, which appears fairly bright to the observer. 
Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or the Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, is 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth and shines with a magnitude of 1.70; taking its ultraviolet light into account, it is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two components orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately at the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lie at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars form a small triangle that marks the hunter's head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. 
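The belt-star magnitudes quoted above can be compared with the standard Pogson scale, on which a difference of five magnitudes corresponds to a factor of 100 in apparent brightness. A sketch (the helper name is mine):

```python
# Pogson's relation: a magnitude difference dm corresponds to a
# brightness ratio of 100**(dm / 5), i.e. 10**(0.4 * dm).

def brightness_ratio(m_fainter, m_brighter):
    """How many times brighter the lower-magnitude star appears."""
    return 10.0 ** (0.4 * (m_fainter - m_brighter))

# Alnilam (magnitude 1.70) vs. Mintaka (magnitude 2.21), as quoted above:
ratio = brightness_ratio(2.21, 1.70)
print(f"Alnilam appears about {ratio:.2f}x brighter than Mintaka")
```

The half-magnitude gap between the two belt stars works out to a roughly 1.6-fold difference in apparent brightness.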
Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori), which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis (the former containing the Trapezium) and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible over very short periods of time. 
Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is also used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), a figure similar to other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. 
This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on the upper right) together form the reflex bow or the lifted scythe. In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), with the stars "hanging" from the Belt known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. 
The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name of the ninth month of the Hebrew calendar (i.e. November–December), which in turn may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". 
In China, Orion was one of the 28 lunar mansions (Sieu or Xiù, 宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion, representing the sound of the word, was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) in Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter), which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for "The Three Wise Men"). The Ojibwa/Chippewa Native Americans call this constellation Mesabi, meaning "Big Man". To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. 
Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married the chief's daughter, symbolizing harmony between the gods and humanity achieved with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino groups referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria". In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. The film distribution company Orion Pictures used the constellation as its logo. 
In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas: Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, Orion holds a club (Chi Orionis). 
His right leg is represented by Theta Orionis and his left leg by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located, owing to the effects of the precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale, much farther away than Sirius, for example. Orion will thus still be recognizable long after most of the other constellations, composed of relatively nearby stars, have distorted into new configurations, though a few of its stars will eventually explode as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Cybersex] | [TOKENS: 1524]
Cybersex Cybersex, also called Internet sex, computer sex, netsex, e-sex, or cybering, is a virtual sex encounter in which two or more people have long-distance sex via electronic text or video communication (webcams, VR headsets, etc.) and other electronics (such as teledildonics) connected to a computer network. Cybersex can also mean exchanging sexually explicit messages that simply describe a sexual experience, without any physical sex taking place (also known as "sexting"). Cybersex is a sub-type of technology-mediated sexual interactions. In one form, it is accomplished by the participants describing their actions and responding to their chat partners in a mostly written form designed to stimulate their own sexual feelings and fantasies. Cybersex often includes real-life masturbation. Environments in which cybersex takes place are not necessarily devoted exclusively to that subject, and participants in any Internet chat may suddenly receive a message of invitation. Non-marital, adult, consensual paid cybersex counts as illegal solicitation of prostitution and illegal prostitution in multiple US states. Non-consensual cybersex sometimes occurs in cybersex trafficking crimes. There has also been at least one rape conviction for purely virtual sexual encounters. Environments Cybersex is commonly performed in Internet chat rooms (such as IRC, talkers, or web chats) and on instant messaging systems. It can also be performed using webcams, voice chat, or online games and virtual worlds like Second Life or VRChat. The exact definition of cybersex—specifically, whether real-life masturbation must be taking place for the online sex act to count as cybersex—is up for debate. It is also fairly frequent in online role-playing games, such as MUDs and MMORPGs, though community attitudes toward this activity vary greatly from game to game. Some online social games like Red Light Center are dedicated to cybersex and other adult behaviors. 
These online games are often called AMMORPGs. Cybersex may also be accomplished through the use of avatars in a multiuser software environment. It is often called mudsex or netsex in MUDs. In TinyMUD variants, particularly MUCKs, the term TinySex (TS) is very common. In a textual environment, good writing skills are a desirable trait in a cybersex partner. Though text-based cybersex has been in practice for decades, the increased popularity of webcams has raised the number of online partners using two-way video connections to "expose" themselves to each other online—giving the act of cybersex a more visual aspect. There are a number of popular, commercial webcam sites that allow people to openly masturbate on camera while others watch them. Using similar sites, couples can also perform on camera for the enjoyment of others. In online worlds like Second Life and via webcam-focused chat services, Internet sex workers engage in cybersex in exchange for both virtual and real-life currency. Advantages Cybersex is said to provide various advantages. Research reviews and surveys call for a balanced view of online sexual activities in general and cybersex in particular, one that acknowledges both advantages and disadvantages, positive and negative effects. Legality of consensual, paid cybersex Several US states use very broad language in their criminal codes for prostitution and prostitution-related crimes. For example, the criminal codes of multiple US states do not require physical presence as an element of prostitution. The role of physical presence in states with broad prostitution definitions has at times been debated in court. California has one such broad law about prostitution, and the California Supreme Court ruled in Wooten v. Superior Court that physical contact is necessary for a prostitution conviction. However, this only protects long-distance buyers of cybersex in that state and may not apply to all sellers of cybersex services. 
Sellers of cybersex in California may still be convicted under existing prostitution laws. Court cases such as Pryor v. Municipal Court and People v. Hill determined that touching of the genitals, buttocks, or female breast is essential to a prostitution conviction. As a result, physical touching between two sellers during a live cybersex service, "for the purpose of sexual arousal" and coupled with "money or other considerations", may in fact qualify as prostitution. Buyers of cybersex services have more criminal liability under solicitation-of-prostitution laws than under act-of-prostitution laws. For example, the Wisconsin Supreme Court ruled that illegal solicitation of prostitution can occur without physical contact. Wisconsin v. Kittilstad ruled that simply offering money to view people having in-person sex, even from a distance, counts as solicitation of prostitution. Wisconsin also explicitly criminalizes offers or requests of non-marital sexual intercourse for anything of value. This allows for criminal prosecution of non-marital, paid cybersex acts where at least one party resides in Wisconsin and at least one party has physical sex as part of the cybersex act. To reduce the potential for criminal liability in cybersex selling, VoyeurDorm.com implemented a "no sex on camera" policy. Non-consensual cybersex Cybersex trafficking is the live streaming of coerced sexual acts and/or rape. Victims are abducted, threatened, or deceived and transferred to "cybersex dens". The dens can be in any location where the cybersex traffickers have a computer, tablet, or phone with an internet connection. Perpetrators use social media networks, videoconferences, pornographic video sharing websites, dating pages, online chat rooms, apps, dark web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of its occurrence are sent to authorities annually. 
The South China Morning Post has stated that new laws and police procedures are needed to combat this type of cybercrime. A man named Bjorn was the first person charged with rape via the internet, after he blackmailed minors into performing sexual acts for him over a computer network, conduct which would normally be prosecuted under other laws. He was charged by Swedish authorities; Swedish law notably does not require physical penetration for a rape conviction. Criticism Cybersex often attracts ridicule because the partners frequently have little verifiable knowledge about each other. For many, the primary point of cybersex is the plausible simulation of sexual activity, and this knowledge of the other is not always desired, but this has also been criticized as the emptying out of embodied relations. In the words of Carkeek and James (1992): "Without continuing to draw off our historically ambivalent faith in embodied relations, techno-sex quickly becomes hollow, unsatisfying, no more erotic than collecting answers to what-are-your-measurements questions. And herein lies the rub, or so we will argue. By continuing to draw off that ambivalent faith, techno-sex and the many other practices of disembodying interaction contribute to a changing and increasingly abstracted dominant ontology of embodiment." Privacy concerns are a difficulty with cybersex, since participants may log or record the interaction without the other's knowledge, and possibly disclose it to others or the public. There is disagreement over whether cybersex is a form of infidelity. While it does not involve physical contact, critics claim that the powerful emotions involved can cause marital stress, especially when cybersex culminates in an Internet romance. In several known cases, Internet adultery became the grounds on which a couple divorced. 
Therapists reported a growing number of patients addicted to this activity in 2002, and treated it as a form of both Internet addiction and sexual addiction, with the standard problems associated with addictive behaviors.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Weak_artificial_intelligence#cite_note-3] | [TOKENS: 594]
Weak artificial intelligence Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as narrow AI or artificial narrow intelligence (ANI), is focused on one narrow task. Weak AI is contrasted with strong AI, which can be interpreted in various ways. Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Artificial general intelligence is, conversely, the opposite. Applications and risks Some examples of narrow AI are AlphaGo, self-driving cars, robot systems used in the medical field, and diagnostic doctors. Narrow AI systems are sometimes dangerous if unreliable, and their behavior can become inconsistent. It can be difficult for such an AI to grasp complex patterns and arrive at a solution that works reliably in varied environments; this "brittleness" can cause it to fail in unpredictable ways. Narrow AI failures can sometimes have significant consequences: they could, for example, cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, or misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed, and medical diagnoses can ultimately have serious and sometimes deadly consequences if the AI is faulty or biased. Simple AI programs have already worked their way into society, often unnoticed by the public: autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. Narrow AI has also been the subject of some controversy, including unfair prison sentences, discrimination against women in workplace hiring, and deaths via autonomous driving, among other cases. Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends.
For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. Some other social media AI systems are used to detect bots that may be involved in propaganda or other potentially malicious activities. Weak AI versus strong AI John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong". Scholars such as Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as, on the other hand, implied by the strong AI assumption).
========================================
[SOURCE: https://en.wikipedia.org/wiki/File:US_SlaveFree1858.gif] | [TOKENS: 67]
File:US SlaveFree1858.gif
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-Martian_regolith_Claudin2006-122] | [TOKENS: 11899]
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all of Earth's dry land. Fine dust is prevalent across the surface and the atmosphere, picked up and spread even by the weak winds of the tenuous atmosphere, aided by the low Martian gravity. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days); a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago.
During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, has been the dominant influence on geological processes ever since. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; in the same year followed the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System.
Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches.
Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. 
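The "38% of Earth's surface gravity" figure above follows directly from Newtonian gravity, g = GM/R². A minimal sketch; the mass and radius values below are standard approximate figures assumed for illustration, not taken verbatim from this article:

```python
# Surface gravity g = G*M / R^2, so the Mars/Earth ratio is
# (M_mars/M_earth) / (R_mars/R_earth)^2.
G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
M_mars, R_mars = 6.417e23, 3.3895e6    # approximate mass (kg) and mean radius (m)
M_earth, R_earth = 5.972e24, 6.371e6   # approximate mass (kg) and mean radius (m)

g_mars = G * M_mars / R_mars**2
g_earth = G * M_earth / R_earth**2

print(f"g_mars = {g_mars:.2f} m/s^2")       # ~3.73 m/s^2
print(f"ratio  = {g_mars / g_earth:.0%}")   # ~38% of Earth's
```

Note how the ~11% mass ratio is partly offset by the smaller radius: gravity scales with mass but inversely with the square of radius, which is why the ratio lands near 38% rather than 11%.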
The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found.
Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past.
This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. 
The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. 
The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw.
"Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. 
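The ~10.8 km scale height quoted above can be recovered from the standard isothermal formula H = RT/(Mg). The mean temperature and molar mass below are assumed round values for a CO2-dominated atmosphere, not figures from this article:

```python
import math

# Isothermal atmospheric scale height H = R*T / (M*g):
# the altitude over which pressure drops by a factor of e.
R = 8.314      # gas constant, J mol^-1 K^-1
T = 210.0      # assumed mean Martian atmospheric temperature, K
M = 0.0434     # assumed mean molar mass of CO2-dominated Martian air, kg/mol
g = 3.71       # Martian surface gravity, m/s^2

H = R * T / (M * g)                # metres
print(f"H ≈ {H / 1000:.1f} km")    # close to the ~10.8 km quoted above

# Pressure then falls off roughly as p(z) = p0 * exp(-z / H):
p0 = 600.0     # mean surface pressure from the text, Pa
print(f"p(20 km) ≈ {p0 * math.exp(-20_000 / H):.0f} Pa")
```

The same formula applied with Earth's heavier gravity and lighter (N2/O2) air shows why Earth's atmosphere is more compressed vertically despite being far denser at the surface.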
The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's.
Additionally the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. Seasonally, dry ice (frozen CO2) also deposits on the polar ice caps. Hydrology Although Mars holds considerable amounts of water, most of it is dust-covered water ice at the Martian polar ice caps. The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods.
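The 43% sunlight figure mentioned above is a direct consequence of the inverse-square law for solar irradiance; the solar-constant value used below is a commonly cited approximation, included only for illustration:

```python
# Solar irradiance falls off as 1/d^2 with distance from the Sun,
# so at 1.52 AU Mars receives 1/1.52^2 of the sunlight Earth does.
d_mars = 1.52                  # Mars's mean distance from the Sun, in AU
fraction = 1.0 / d_mars**2
print(f"{fraction:.0%}")       # → 43%

# In absolute terms (Earth's solar constant ~1361 W/m^2 is an assumed value):
solar_constant_earth = 1361.0
print(f"{solar_constant_earth * fraction:.0f} W/m^2")   # ~589 W/m^2
```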
Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. 
No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. 
Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium in the modern Martian atmosphere compared to that ratio on Earth. The Martian deuterium ratio (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the ratio on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. 
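The "five to seven times" enrichment factor can be verified directly from the two D/H ratios quoted above; a quick check in Python, using the text's values and stated uncertainty:

```python
# Deuterium enrichment of the Martian atmosphere relative to Earth,
# computed from the D/H ratios quoted in the text.
dh_mars = 9.3e-4        # D/H on Mars (central value)
dh_err = 1.7e-4         # stated uncertainty
dh_earth = 1.56e-4      # D/H on Earth

low = (dh_mars - dh_err) / dh_earth
high = (dh_mars + dh_err) / dh_earth
print(f"enrichment: {low:.1f} to {high:.1f} times Earth's ratio")
```

The range works out to roughly 4.9–7.1, consistent with the "five to seven times" figure.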
Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system. Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Earth and Mars, is the second lowest of any planet as seen from Earth, after Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth, near opposition, once every synodic period of 779.94 days. 
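The 779.94-day synodic period follows from the orbital periods of the two planets via the relation 1/S = 1/P_Earth − 1/P_Mars. A minimal sketch, assuming standard sidereal periods (these inputs are not from the text):

```python
# Synodic period of Mars as seen from Earth: 1/S = 1/P_earth - 1/P_mars.
# Both sidereal periods are standard values, assumed for this sketch.
P_EARTH = 365.256   # days
P_MARS = 686.98     # days

S = 1 / (1 / P_EARTH - 1 / P_MARS)
print(f"synodic period: {S:.1f} days")
```

This reproduces the quoted figure to within the rounding of the input periods, and also explains the roughly 26-month spacing of Mars launch windows mentioned later in the article.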
It should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Sun, forming a straight line crossing it. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. 
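The magnitude figures above translate into brightness ratios through the standard logarithmic magnitude relation (a factor of 10^0.4 per magnitude); a quick check using the numbers from the text:

```python
# Apparent magnitude is logarithmic: a difference of dm magnitudes
# corresponds to a brightness ratio of 10**(0.4 * dm).
m_brightest = -3.0     # near perihelic opposition
m_faintest = 1.86      # near aphelion, in conjunction with the Sun

ratio = 10 ** (0.4 * (m_faintest - m_brightest))
print(f"brightness varies by a factor of about {ratio:.0f}")

# Distance check: farthest vs. closest approach, from the text (km).
print(f"distance ratio: {401e6 / 54e6:.1f}")
```

The brightness factor comes out near 90, and the distance ratio near 7.4, matching the statement that Mars is "more than seven times farther away" at its most distant.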
Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. 
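Phobos's position below synchronous altitude, noted above, can be sketched with Kepler's third law. The GM value for Mars used here is a standard published figure, an assumption not taken from the text:

```python
import math

# Orbital period from Kepler's third law: T = 2*pi*sqrt(a**3 / GM).
GM_MARS = 4.2828e13      # Mars's gravitational parameter, m^3/s^2 (assumed)
a_phobos = 9.376e6       # Phobos's orbital radius from the text, m

T_hours = 2 * math.pi * math.sqrt(a_phobos**3 / GM_MARS) / 3600
print(f"Phobos orbital period: {T_hours:.2f} h")

# Mars rotates once every ~24.62 h, so Phobos orbits faster than the
# planet spins: it is below synchronous altitude and its orbit decays.
print("below synchronous altitude:", T_hours < 24.62)
```

The computed period is about 7.7 hours, well under a Martian day, which is why tidal forces are lowering Phobos's orbit while raising Deimos's.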
Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. 
In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"), though it was commonly called by the name of the god Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. 
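The Babylonian period relation quoted above (37 synodic periods, or 42 circuits of the zodiac, every 79 years) agrees remarkably well with modern values; a quick check, assuming a 365.25-day year:

```python
# Babylonian goal-year relation for Mars: in 79 years the planet completes
# 37 synodic periods and 42 circuits of the zodiac (sidereal revolutions).
span_days = 79 * 365.25   # 79 years, assuming a 365.25-day year

synodic = span_days / 37
sidereal = span_days / 42
print(f"implied synodic period: {synodic:.1f} days (modern: ~779.9)")
print(f"implied sidereal period: {sidereal:.1f} days (modern: ~687.0)")
```

Both implied periods land within a fraction of a day of the modern figures, which is why the 79-year relation made such accurate position predictions possible.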
From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance. This was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition to Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. 
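Kepler's observation that Mars speeds up near the Sun, noted above, follows from conservation of angular momentum: since r·v is the same at perihelion and aphelion, the speed ratio between the two points is (1+e)/(1−e). A sketch using the standard value of Mars's eccentricity (an assumed input; the article quotes it only as "about 0.09"):

```python
# Conservation of angular momentum at the apsides: r_p * v_p = r_a * v_a.
# With r_p = a*(1 - e) and r_a = a*(1 + e), the speed ratio is (1+e)/(1-e).
e = 0.0934   # Mars's orbital eccentricity (standard value, assumed)

speed_ratio = (1 + e) / (1 - e)
print(f"Mars moves about {(speed_ratio - 1) * 100:.0f}% faster at perihelion")
```

Even Mars's modest eccentricity produces a roughly 21% speed difference between the apsides, large enough for Kepler to detect in Brahe's data.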
The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous concepts of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Until 1997 and after Viking 1 shut down in 1982, Mars was only visited by three unsuccessful probes: two flew past without contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioned in orbit before reaching its destination, Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted robotic presence at Mars that has lasted until today. 
It produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the Mars Orbiter Mission, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Planned missions to Mars include: As of February 2024, debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. 
Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). 
Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, which on Earth can preserve signs of life, has also been found in impact craters on Mars and could likewise have preserved signs of life, if life existed at those sites. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. 
Although highly intriguing, the rock permits no definitive determination of a biological or abiotic origin with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans presented in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". 
The planet's symbol, a circle with an arrow pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. 
A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Biological] | [TOKENS: 10547]
Contents Biology Biology is the scientific study of life and living organisms. It is a broad natural science that encompasses a wide range of fields and unifying principles that explain the structure, function, growth, origin, evolution, and distribution of life. Central to biology are five fundamental themes: the cell as the basic unit of life, genes and heredity as the basis of inheritance, evolution as the driver of biological diversity, energy transformation for sustaining life processes, and the maintenance of internal stability (homeostasis). Biology examines life across multiple levels of organization, from molecules and cells to organisms, populations, and ecosystems. Subdisciplines include molecular biology, physiology, ecology, evolutionary biology, developmental biology, and systematics, among others. Each of these fields applies a range of methods to investigate biological phenomena, including observation, experimentation, and mathematical modeling. Modern biology is grounded in the theory of evolution by natural selection, first articulated by Charles Darwin, and in the molecular understanding of genes encoded in DNA. The discovery of the structure of DNA and advances in molecular genetics have transformed many areas of biology, leading to applications in medicine, agriculture, biotechnology, and environmental science. Life on Earth is believed to have originated over 3.7 billion years ago. Today, it includes a vast diversity of organisms—from single-celled archaea and bacteria to complex multicellular plants, fungi, and animals. Biologists classify organisms based on shared characteristics and evolutionary relationships, using taxonomic and phylogenetic frameworks. These organisms interact with each other and with their environments in ecosystems, where they play roles in energy flow and nutrient cycling. 
As a constantly evolving field, biology incorporates new discoveries and technologies that enhance the understanding of life and its processes, while contributing to solutions for challenges such as disease, climate change, and biodiversity loss. Etymology From Greek βίος (bíos) 'life' (from the Proto-Indo-European root *gwei-, 'to live') and λογία (logia) 'study of'. The compound appears in the title of Volume 3 of Michael Christoph Hanow's Philosophiae naturalis sive physicae dogmaticae: Geologia, biologia, phytologia generalis et dendrologia, published in 1766. The term biology in its modern sense appears to have been introduced independently by Thomas Beddoes (in 1799), Karl Friedrich Burdach (in 1800), Gottfried Reinhold Treviranus (Biologie oder Philosophie der lebenden Natur, 1802) and Jean-Baptiste Lamarck (Hydrogéologie, 1802). History The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scientific study of plants. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz (781–869), Al-Dīnawarī (828–896), who wrote on botany, and Rhazes (865–925) who wrote on anatomy and physiology. Medicine was especially well studied by Islamic scholars working in Greek philosopher traditions, while natural history drew heavily on Aristotelian thought.[citation needed] Biology began to quickly develop with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, infusoria and the diversity of microscopic life. 
Investigations by Jan Swammerdam led to new interest in entomology and helped to develop techniques of microscopic dissection and staining. Advances in microscopy had a profound impact on biological thinking. In the early 19th century, biologists pointed to the central importance of the cell. In 1838, Schleiden and Schwann began promoting the now universal ideas that (1) the basic unit of organisms is the cell and (2) that individual cells have all the characteristics of life, although they opposed the idea that (3) all cells come from the division of other cells, continuing to support spontaneous generation. However, Robert Remak and Rudolf Virchow were able to reify the third tenet, and by the 1860s most biologists accepted all three tenets which consolidated into cell theory. Meanwhile, taxonomy and classification became the focus of natural historians. Carl Linnaeus published a basic taxonomy for the natural world in 1735, and in the 1750s introduced scientific names for all his species. Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable—even suggesting the possibility of common descent. Serious evolutionary thinking originated with the works of Jean-Baptiste Lamarck, who presented a coherent theory of evolution. The British naturalist Charles Darwin, combining the biogeographical approach of Humboldt, the uniformitarian geology of Lyell, Malthus's writings on population growth, and his own morphological expertise and extensive natural observations, forged a more successful evolutionary theory based on natural selection; similar reasoning and evidence led Alfred Russel Wallace to independently reach the same conclusions. The basis for modern genetics began with the work of Gregor Mendel in 1865. This outlined the principles of biological inheritance. 
However, the significance of his work was not realized until the early 20th century when evolution became a unified theory as the modern synthesis reconciled Darwinian evolution with classical genetics. In the 1940s and early 1950s, a series of experiments by Alfred Hershey and Martha Chase pointed to DNA as the component of chromosomes that held the trait-carrying units that had become known as genes. A focus on new kinds of model organisms such as viruses and bacteria, along with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, marked the transition to the era of molecular genetics. From the 1950s onwards, biology has been vastly extended in the molecular domain. The genetic code was cracked by Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg after DNA was understood to contain codons. The Human Genome Project was launched in 1990 to map the human genome. Chemical basis All organisms are made up of chemical elements; oxygen, carbon, hydrogen, and nitrogen account for most (96%) of the mass of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting essentially all the remainder. Different elements can combine to form compounds such as water, which is fundamental to life. Biochemistry is the study of chemical processes within and relating to living organisms. Molecular biology is the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions.[citation needed] Life arose from the Earth's first ocean, which formed some 3.8 billion years ago. Since then, water continues to be the most abundant molecule in every organism. Water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. 
Once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. In terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen (H) atoms to one oxygen (O) atom (H2O). Because the O–H bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. This polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. Surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. Water is also adhesive as it is able to adhere to the surface of any polar or charged non-water molecules. Water is denser as a liquid than it is as a solid (or ice). This unique property of water allows ice to float on liquid water in ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. Water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. Thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. As a molecule, water is not completely stable: each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. In pure water, the number of hydrogen ions balances (or equals) the number of hydroxyl ions, resulting in a pH that is neutral.[citation needed] Organic compounds are molecules that contain carbon bonded to another element such as hydrogen. With the exception of water, nearly all the molecules that make up each organism contain carbon. Carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules.
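The neutrality of pure water can be expressed quantitatively: pH is the negative base-10 logarithm of the hydrogen-ion concentration, and in pure water the hydrogen- and hydroxyl-ion concentrations balance at about 10^-7 mol/L. A minimal sketch (the concentration is the standard textbook figure):

```python
import math

def ph(hydrogen_ion_molarity):
    """pH = -log10 of the hydrogen-ion concentration (mol/L)."""
    return -math.log10(hydrogen_ion_molarity)

# In pure water, [H+] = [OH-] = 1e-7 mol/L, giving the neutral pH of 7.
print(ph(1e-7))  # 7.0
```

Acidic solutions have a higher hydrogen-ion concentration and therefore a lower pH; for example, `ph(1e-3)` gives 3.0.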
For example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide (CO2), or a triple covalent bond such as in carbon monoxide (CO). Moreover, carbon can form very long chains of interconnecting carbon–carbon bonds such as octane or ring-like structures such as glucose.[citation needed] The simplest organic molecules are hydrocarbons, a large family of organic compounds composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other elements such as oxygen (O), hydrogen (H), phosphorus (P), and sulfur (S), which can change the chemical behavior of that compound. Groups of atoms that contain these elements (O-, H-, P-, and S-) and are bonded to a central carbon atom or skeleton are called functional groups. There are six prominent functional groups that can be found in organisms: amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. In 1953, the Miller–Urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early Earth, thus suggesting that complex organic molecules could have arisen spontaneously on early Earth (see abiogenesis). Macromolecules are large molecules made up of smaller subunits or monomers. Monomers include sugars, amino acids, and nucleotides. Carbohydrates include monomers and polymers of sugars. Lipids are the only class of macromolecules that are not made up of polymers. They include steroids, phospholipids, and fats, largely nonpolar and hydrophobic (water-repelling) substances. Proteins are the most diverse of the macromolecules. They include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. The basic unit (or monomer) of a protein is an amino acid. Twenty amino acids are used in proteins. Nucleic acids are polymers of nucleotides.
Their function is to store, transmit, and express hereditary information. Cells Cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. Most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. There are generally two types of cells: eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be single-celled or multicellular. In multicellular organisms, every cell in the organism's body is derived ultimately from a single cell in a fertilized egg.[citation needed] Every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. A cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. Cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. Cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. Cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. Within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. 
In addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially distinct units. These organelles include the cell nucleus, which contains most of the cell's DNA, and mitochondria, which generate adenosine triphosphate (ATP) to power cellular processes. Other organelles such as the endoplasmic reticulum and the Golgi apparatus play a role in the synthesis and packaging of proteins, respectively. Biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. Plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. Eukaryotic cells also have a cytoskeleton made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. In terms of their structural composition, the microtubules are made up of tubulin (e.g., α-tubulin and β-tubulin) whereas intermediate filaments are made up of fibrous proteins. Microfilaments are made up of actin molecules that interact with other strands of proteins. All cells require energy to sustain cellular processes. Metabolism is the set of chemical reactions in an organism. The three main purposes of metabolism are: the conversion of food to energy to run cellular processes; the conversion of food/fuel to monomer building blocks; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments.
Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, the breaking down of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy. The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly without being consumed by it—by reducing the amount of activation energy needed to convert reactants into products. Enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.[citation needed] Cellular respiration is a set of metabolic reactions and processes that take place in cells to convert chemical energy from nutrients into adenosine triphosphate (ATP), and then release waste products. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions.[citation needed] Sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration.
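The effect of lowering activation energy can be illustrated with the Arrhenius relation, in which reaction rate scales with exp(-Ea/RT). The activation energies below are hypothetical values chosen purely for illustration, not measurements for any particular enzyme:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_factor(activation_energy_j_mol, temperature_k=310.0):
    """Relative rate factor exp(-Ea / RT), here at about body temperature."""
    return math.exp(-activation_energy_j_mol / (R * temperature_k))

# Hypothetical example: an enzyme lowering Ea from 80 kJ/mol to 50 kJ/mol
speedup = arrhenius_factor(50e3) / arrhenius_factor(80e3)
print(f"roughly {speedup:.0e}x faster")
```

Even this modest reduction in activation energy speeds the reaction by a factor of about 10^5, which is why enzyme catalysis can make otherwise negligibly slow reactions biologically useful.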
Cellular respiration involving oxygen is called aerobic respiration, which has four stages: glycolysis, citric acid cycle (or Krebs cycle), electron transport chain, and oxidative phosphorylation. Glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of ATP being produced at the same time. Each pyruvate is then oxidized into acetyl-CoA by the pyruvate dehydrogenase complex, which also generates NADH and carbon dioxide. Acetyl-CoA enters the citric acid cycle, which takes place inside the mitochondrial matrix. At the end of the cycle, the total yield from 1 glucose (or 2 pyruvates) is 6 NADH, 2 FADH2, and 2 ATP molecules. The final stage is oxidative phosphorylation, which, in eukaryotes, occurs in the mitochondrial cristae. Oxidative phosphorylation comprises the electron transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from NADH and FADH2 that is coupled to the pumping of protons (hydrogen ions) across the inner mitochondrial membrane (chemiosmosis), which generates a proton motive force. Energy from the proton motive force drives the enzyme ATP synthase to synthesize more ATP by phosphorylating ADP. The transfer of electrons terminates with molecular oxygen being the final electron acceptor.[citation needed] If oxygen is not present, pyruvate is not metabolized by cellular respiration but undergoes fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can be used in glycolysis again, and of removing the excess pyruvate. Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis.
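The per-glucose bookkeeping above can be tallied directly. The glycolysis NADH count (2 per glucose) is standard textbook stoichiometry added here for completeness; the other figures match the stage yields given in the text:

```python
# Yields per glucose molecule for the stages of aerobic respiration
# (substrate-level ATP plus the electron carriers that go on to feed
# oxidative phosphorylation).
stages = {
    "glycolysis":         {"ATP": 2, "NADH": 2, "FADH2": 0},
    "pyruvate oxidation": {"ATP": 0, "NADH": 2, "FADH2": 0},
    "citric acid cycle":  {"ATP": 2, "NADH": 6, "FADH2": 2},
}

totals = {molecule: sum(stage[molecule] for stage in stages.values())
          for molecule in ("ATP", "NADH", "FADH2")}
print(totals)  # {'ATP': 4, 'NADH': 10, 'FADH2': 2}
```

The 10 NADH and 2 FADH2 then donate their electrons to the electron transport chain, where the bulk of the cell's ATP is produced by oxidative phosphorylation.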
In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. This waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms carried by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ accepts hydrogen from lactate, regenerating pyruvate that can then be oxidized to form ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen.[citation needed] Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism's metabolic activities via cellular respiration. This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. In most cases, oxygen is released as a waste product. Most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the energy necessary for life on Earth. Photosynthesis has four stages: Light absorption, electron transport, ATP synthesis, and carbon fixation. Light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes.
The absorbed light energy is used to remove electrons from a donor (water) to a primary electron acceptor, a quinone designated as Q. In the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach the final electron acceptor, usually NADP+, which is reduced to NADPH, a process that takes place in a protein complex called photosystem I (PSI). The transport of electrons is coupled to the movement of protons (or hydrogen) from the stroma to the thylakoid membrane, which forms a pH gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. This is analogous to the proton-motive force generated across the inner mitochondrial membrane in aerobic respiration. During the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the ATP synthase is coupled to the synthesis of ATP by that same ATP synthase. The NADPH and ATP generated by the light-dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate (RuBP) in a sequence of light-independent (or dark) reactions called the Calvin cycle. Cell signaling (or communication) is the ability of cells to receive, process, and transmit signals with their environment and with themselves. Signals can be non-chemical such as light, electrical impulses, and heat, or chemical signals (or ligands) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. There are generally four types of chemical signals: autocrine, paracrine, juxtacrine, and hormones. In autocrine signaling, the ligand affects the same cell that releases it.
Tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self-division. In paracrine signaling, the ligand diffuses to nearby cells and affects them. For example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle cell. In juxtacrine signaling, there is direct contact between the signaling and responding cells. Finally, hormones are ligands that travel through the circulatory systems of animals or vascular systems of plants to reach their target cells. Once a ligand binds with a receptor, it can influence the behavior of another cell, depending on the type of receptor. For instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. Other types of receptors include protein kinase receptors (e.g., receptor for the hormone insulin) and G protein-coupled receptors. Activation of G protein-coupled receptors can initiate second messenger cascades. The process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction.[citation needed] The cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. In eukaryotes (i.e., animal, plant, fungal, and protist cells), there are two distinct types of cell division: mitosis and meiosis. Mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. 
In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often followed by telophase and cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic phase of an animal cell cycle—the division of the mother cell into two genetically identical daughter cells. The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. After cell division, each of the daughter cells begins the interphase of a new cycle. In contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions. Homologous chromosomes are separated in the first division (meiosis I), and sister chromatids are separated in the second division (meiosis II). Both of these cell division cycles occur at some point in the life cycle of sexually reproducing organisms. Both are believed to be present in the last eukaryotic common ancestor.[citation needed] Prokaryotes (i.e., archaea and bacteria) can also undergo cell division (or binary fission). Unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus in the cell. Before binary fission, DNA in the bacterium is tightly coiled. After it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases in size to prepare for splitting. Growth of a new cell wall begins to separate the bacterium (triggered by FtsZ polymerization and "Z-ring" formation). The new cell wall (septum) fully develops, resulting in the complete split of the bacterium.
The new daughter cells have tightly coiled DNA rods, ribosomes, and plasmids.[citation needed] Meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. Two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic DNA damage and genetic complementation which masks the expression of deleterious recessive mutations. The beneficial effect of genetic complementation, derived from outcrossing (cross-fertilization), is also referred to as hybrid vigor or heterosis. Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom at the start of chapter XII noted "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." Genetic variation, often produced as a byproduct of sexual reproduction, may provide long-term advantages to those sexual lineages that engage in outcrossing. Genetics Genetics is the scientific study of inheritance. Mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. It has several principles. The first is that genetic characteristics are determined by alleles, which are discrete and occur in alternate forms (e.g., purple vs. white or tall vs. dwarf), each inherited from one of two parents. The law of dominance and uniformity states that some alleles are dominant while others are recessive: an organism with at least one dominant allele will display the phenotype of that dominant allele. During gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene.
Heterozygous individuals produce gametes with an equal frequency of the two alleles. Finally, the law of independent assortment states that genes of different traits can segregate independently during the formation of gametes, i.e., genes are unlinked. Exceptions to this rule include traits that are sex-linked. Test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. A Punnett square can be used to predict the results of a test cross. The chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by Thomas Hunt Morgan's experiments with fruit flies, which established the linkage between eye color and sex in these insects. A gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid (DNA) that carries genetic information that controls form or function of an organism. DNA is composed of two polynucleotide chains that coil around each other to form a double helix. It is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell is collectively known as its genome. In eukaryotes, DNA is mainly in the cell nucleus. In prokaryotes, the DNA is held within the nucleoid. The genetic information is held within genes, and the complete assemblage in an organism is called its genotype. DNA replication is a semiconservative process whereby each strand serves as a template for a new strand of DNA. Mutations are heritable changes in DNA. They can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical (e.g., nitrous acid, benzopyrene) or radiation (e.g., x-ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes). Mutations can be classified by their phenotypic effects as loss-of-function, gain-of-function, or conditional mutations.
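The segregation of alleles and the Punnett-square prediction can be sketched in a few lines; the allele symbols 'P' and 'p' are arbitrary placeholders for a dominant and a recessive allele:

```python
from itertools import product

def punnett(parent1, parent2):
    """Combine each allele of one parent with each allele of the other."""
    return ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]

def phenotypes(genotypes, dominant="P"):
    """An organism with at least one dominant allele shows the dominant trait."""
    shown = sum(dominant in g for g in genotypes)
    return shown, len(genotypes) - shown

offspring = punnett("Pp", "Pp")   # cross of two heterozygotes
print(offspring)                  # ['PP', 'Pp', 'Pp', 'pp']
print(phenotypes(offspring))      # (3, 1): the classic 3:1 ratio
```

A test cross against a homozygous recessive parent, `punnett("Pp", "pp")`, yields half dominant and half recessive phenotypes, revealing that the tested parent is heterozygous.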
Some mutations are beneficial, as they are a source of genetic variation for evolution. Others are harmful if they result in a loss of function of genes needed for survival. Gene expression is the molecular process by which a genotype encoded in DNA gives rise to an observable phenotype in the proteins of an organism's body. This process is summarized by the central dogma of molecular biology, which was formulated by Francis Crick in 1958. According to the central dogma, genetic information flows from DNA to RNA to protein. There are two gene expression processes: transcription (DNA to RNA) and translation (RNA to protein). The regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, RNA splicing, translation, and post-translational modification of a protein. Gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors binds to the DNA sequence close to or at a promoter. A cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes (e.g., Caenorhabditis elegans). In positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. Negative regulation occurs when another transcription factor called a repressor binds to a DNA sequence called an operator, which is part of an operon, to prevent transcription. Repressors can be inhibited by compounds called inducers (e.g., allolactose), thereby allowing transcription to occur. Specific genes that can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. In contrast to both, structural genes encode proteins that are not involved in gene regulation.
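The two steps of the central dogma, transcription and translation, can be sketched with a toy gene. The DNA sequence and the four-entry codon table below are a hypothetical fragment of the standard genetic code chosen for illustration, not a full implementation:

```python
# A small subset of the standard genetic code (codon -> amino acid)
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna_template_strand):
    """Transcription: build the mRNA complementary to the DNA template strand."""
    pairs = {"A": "U", "T": "A", "C": "G", "G": "C"}
    return "".join(pairs[base] for base in dna_template_strand)

def translate(mrna):
    """Translation: read the mRNA codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAACCGATT")  # -> "AUGUUUGGCUAA"
print(translate(mrna))             # ['Met', 'Phe', 'Gly']
```

A point mutation in the DNA propagates through both steps: changing a single template base can change a codon and hence the amino acid placed at that position, which is how mutations produce phenotypic effects at the protein level.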
In addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of DNA and protein found in eukaryotic cells. Development is the process by which a multicellular organism (plant or animal) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. There are four key processes that underlie development: Determination, differentiation, morphogenesis, and growth. Determination sets the developmental fate of a cell, which becomes more restrictive during development. Differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. Stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. Cellular differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Thus, different cells can have very different physical characteristics despite having the same genome. Morphogenesis, or the development of body form, is the result of spatial differences in gene expression. A small fraction of the genes in an organism's genome called the developmental-genetic toolkit control the development of that organism. These toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Among the most important toolkit genes are the Hox genes. 
Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. Evolution Evolution is a central organizing concept in biology. It is the change in heritable characteristics of populations over successive generations. In artificial selection, animals are selectively bred for specific traits. Given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, Darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. Darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. He further inferred that this would lead to the accumulation of favorable traits over successive generations, thereby increasing the match between the organisms and their environment. A species is a group of organisms that mate with one another, and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently from each other. For speciation to occur, there has to be reproductive isolation. Reproductive isolation can result from incompatibilities between genes as described by the Bateson–Dobzhansky–Muller model. Reproductive isolation also tends to increase with genetic divergence. Speciation can occur when there are physical barriers that divide an ancestral species, a process known as allopatric speciation. A phylogeny is an evolutionary history of a specific group of organisms or their genes. It can be represented using a phylogenetic tree, a diagram showing lines of descent among organisms or their genes. Each line drawn on the time axis of a tree represents a lineage of descendants of a particular species or population. When a lineage divides into two, it is represented as a fork or split on the phylogenetic tree.
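Darwin's inference that favorable traits accumulate over successive generations can be illustrated with a simple deterministic frequency model; the fitness values below are hypothetical and chosen only for illustration:

```python
def favored_frequency(p, w_favored=1.1, w_other=1.0, generations=50):
    """Frequency of a favored heritable trait after repeated selection.

    Each generation, individuals reproduce in proportion to fitness, so the
    trait's share is rescaled by its fitness relative to the population mean.
    """
    for _ in range(generations):
        mean_fitness = p * w_favored + (1 - p) * w_other
        p = p * w_favored / mean_fitness
    return p

# A trait with a 10% reproductive advantage, starting in 10% of the
# population, comes to dominate within about 50 generations.
print(round(favored_frequency(0.1), 2))  # 0.93
```

The same advantage compounds generation after generation, which is why even small differences in survival and reproduction can produce a close match between organisms and their environment over evolutionary time.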
Phylogenetic trees are the basis for comparing and grouping different species. Different species that share a feature inherited from a common ancestor are described as having homologous features (or synapomorphy). Phylogeny provides the basis of biological classification. This classification system is rank-based, with the highest rank being the domain followed by kingdom, phylum, class, order, family, genus, and species. All organisms can be classified as belonging to one of three domains: Archaea (originally Archaebacteria), Bacteria (originally eubacteria), or Eukarya (includes the fungi, plant, and animal kingdoms). The history of life on Earth traces how organisms have evolved from the earliest emergence of life to present day. Earth formed about 4.5 billion years ago and all life on Earth, both living and extinct, descended from a last universal common ancestor that lived about 3.5 billion years ago. Geologists have developed a geologic time scale that divides the history of the Earth into major divisions, starting with four eons (Hadean, Archean, Proterozoic, and Phanerozoic), the first three of which are collectively known as the Precambrian, which lasted approximately 4 billion years. Each eon can be divided into eras, with the Phanerozoic eon that began 539 million years ago being subdivided into Paleozoic, Mesozoic, and Cenozoic eras. These three eras together comprise eleven periods (Cambrian, Ordovician, Silurian, Devonian, Carboniferous, Permian, Triassic, Jurassic, Cretaceous, Tertiary, and Quaternary). The similarities among all known present-day species indicate that they have diverged through the process of evolution from their common ancestor. Biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. 
Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon and many of the major steps in early evolution are thought to have taken place in this environment. The earliest evidence of eukaryotes dates from 1.85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. Algae-like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2.7 billion years ago. Microorganisms are thought to have paved the way for the inception of land plants in the Ordovician period. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event. Ediacara biota appear during the Ediacaran period, while vertebrates, along with most other modern phyla, originated about 525 million years ago during the Cambrian explosion. During the Permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the Permian–Triassic extinction event 252 million years ago. During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates; one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods. After the Cretaceous–Paleogene extinction event 66 million years ago killed off the non-avian dinosaurs, mammals increased rapidly in size and diversity. Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. Diversity Bacteria constitute a large domain of prokaryotic microorganisms.
Typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the Earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals. Most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. Archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), a term that has fallen out of use. Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. Archaea use more energy sources than eukaryotes: these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. Salt-tolerant archaea (the Haloarchaea) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. 
Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of Archaea form endospores. The first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. Archaea are a major part of Earth's life. They are part of the microbiota of all organisms. In the human microbiome, they are important in the gut, mouth, and on the skin. Their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles: carbon fixation; nitrogen cycling; organic compound turnover; and maintaining microbial symbiotic and syntrophic communities, for example. Eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria (or symbiogenesis) that gave rise to mitochondria and chloroplasts, both of which are now part of modern-day eukaryotic cells. The major lineages of eukaryotes diversified in the Precambrian about 1.5 billion years ago and can be classified into eight major clades: alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. Five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. While it is likely that protists share a common ancestor (the last eukaryotic common ancestor), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. 
Like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. Most protists are unicellular; these are called microbial eukaryotes. Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae, which excludes fungi and some algae. Plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. The first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. Algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled the early unicellular ancestor of Plantae in form. Unlike glaucophytes, the other algal clades such as red and green algae are multicellular. Green algae comprise three major clades: chlorophytes, coleochaetophytes, and stoneworts. Fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. Many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. Animals are multicellular eukaryotes. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million animal species in total. They have complex interactions with each other and their environments, forming intricate food webs. 
Viruses are submicroscopic infectious agents that replicate inside the cells of organisms. Viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. More than 6,000 virus species have been described in detail. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity. The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Because viruses possess some but not all characteristics of life, they have been described as "organisms at the edge of life", and as self-replicators. Ecology Ecology is the study of the distribution and abundance of life, and of the interactions between organisms and their environment. The community of living (biotic) organisms in conjunction with the nonliving (abiotic) components (e.g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil) of their environment is called an ecosystem. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals move matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes. A population is a group of organisms of the same species that occupies an area and reproduces from generation to generation. 
Population size can be estimated by multiplying population density by the area or volume. The carrying capacity of an environment is the maximum population size of a species that can be sustained by that specific environment, given the food, habitat, water, and other resources that are available. The carrying capacity of a population can be affected by changing environmental conditions such as changes in the availability of resources and the cost of maintaining them. In human populations, new technologies such as the Green Revolution have helped increase the Earth's carrying capacity for humans over time, confounding predictions of impending famine and population collapse, the most famous of which was made by Thomas Malthus in the 18th century. A community is a group of populations of species occupying the same geographical area at the same time. A biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, like pollination and predation, or long-term; both often strongly influence the evolution of the species involved. A long-term interaction is called a symbiosis. Symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. Every species participates as a consumer, resource, or both in consumer–resource interactions, which form the core of food chains or food webs. There are different trophic levels within any food web, with the lowest level being the primary producers (or autotrophs) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. At the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. 
Heterotrophs that consume plants are primary consumers (or herbivores), heterotrophs that consume herbivores are secondary consumers (or carnivores), those that eat secondary consumers are tertiary consumers, and so on. Omnivorous heterotrophs are able to consume at multiple levels. Finally, there are decomposers that feed on the waste products or dead bodies of organisms. On average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one-tenth of the energy of the trophic level that it consumes. Waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. In the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. For example, matter from terrestrial autotrophs is both biotic and accessible to other organisms whereas the matter in rocks and minerals is abiotic and inaccessible. A biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic (biosphere) and the abiotic (lithosphere, atmosphere, and hydrosphere) compartments of Earth. There are biogeochemical cycles for nitrogen, carbon, and water. Conservation biology is the study of the conservation of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is concerned with factors that influence the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. 
The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years, a loss that would contribute to poverty and starvation and reset the course of evolution on this planet. Biodiversity affects the functioning of ecosystems, which provide a variety of services upon which people depend. Conservation biologists research and educate on the trends of biodiversity loss, species extinctions, and the negative effect these are having on our capabilities to sustain the well-being of human society. Organizations and citizens are responding to the current biodiversity crisis through conservation action plans that direct research, monitoring, and education programs that address concerns at local through global scales.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Raku_(programming_language)] | [TOKENS: 4659]
Contents Raku (programming language) Raku is a member of the Perl family of programming languages. Formerly named Perl 6, it was renamed in October 2019. Raku introduces elements of many modern and historical languages. Compatibility with Perl was not a goal, though a compatibility mode is part of the specification. The design process for Raku began in 2000. History In Perl 6, we decided it would be better to fix the language than fix the user. — Larry Wall The Raku design process was first announced on 19 July 2000, on the fourth day of that year's Perl Conference, by Larry Wall in his State of the Onion 2000 talk. At that time, the primary goals were to remove "historical warts" from the language; "easy things should stay easy, hard things should get easier, and impossible things should get hard"; and a general cleanup of the internal design and application programming interfaces (APIs). The process began with a series of Request for Comments (RFCs). This process was open to all contributors, and left no aspect of the language closed to change. Once the RFC process was complete, Wall reviewed and classified each of the 361 requests received. He then began the process of writing several "Apocalypses", using the original meaning of the term, "revealing". While the original goal was to write one Apocalypse for each chapter of Programming Perl, it became obvious that, as each Apocalypse was written, previous Apocalypses were being invalidated by later changes. For this reason, a set of Synopses was published, each one relating the contents of an Apocalypse, but with any subsequent changes reflected in updates. Today, the Raku specification is managed through the "roast" testing suite, while the Synopses are kept as a historical reference. There is also a series of Exegeses written by Damian Conway that explain the content of each Apocalypse in terms of practical usage. 
Each Exegesis consists of code examples along with a discussion of the usage and implications of the examples. There are three primary methods of communication used in the development of Raku today. The first is the Raku Internet Relay Chat (IRC) channel on Libera Chat. The second is a set of mailing lists. The third is the Git source code repository hosted at GitHub. The major goal Wall suggested in his initial speech was the removal of historical warts. These included the confusion surrounding sigil usage for containers, the ambiguity between the select functions, and the syntactic impact of bareword filehandles. There were many other problems that Perl programmers had discussed fixing for years, and these were explicitly addressed by Wall in his speech. An implication of these goals was that Perl 6 would not have backward compatibility with the existing Perl codebase. This meant that some code which was correctly interpreted by a Perl 5 compiler would not be accepted by a Perl 6 compiler. Since backward compatibility is a common goal when enhancing software, the breaking changes in Perl 6 had to be stated explicitly. The distinction between Perl 5 and Perl 6 became so large that eventually Perl 6 was renamed Raku. The language's mascot is "Camelia, the Raku bug". Her name is a nod to the camel mascot associated with Perl, and her form, in the pun-loving tradition of the Perl community, is a play on "software bug". Spiral designs embedded in her butterfly-like wings resemble the characters "P6", the favored nickname for Perl 6, and off-center eye placement is an intentional pun on "Wall-eyed". One of the goals behind the lively and colorful design of the logo was to discourage misogyny in the community and for it to be an opportunity for those of "masculine persuasion" to show their sensitive side. Implementations As of 2017, only the Rakudo implementation is under active development. 
No implementation will be designated as the official Raku implementation; rather, "Raku is anything that passes the official test suite." Rakudo Perl 6 targets a number of virtual machines, such as MoarVM, the Java Virtual Machine, and JavaScript. MoarVM is a virtual machine built especially for Rakudo and the NQP Compiler Toolchain. There is a layer between Raku and the virtual machines named Not Quite Perl 6 (NQP), which implements Raku rules for parsing Raku, and an abstract syntax tree and backend-specific code generation. Large portions of Rakudo are written in Raku, or in its subset NQP. Rakudo is not a completely self-hosting implementation, nor are there concrete plans at this point to make Rakudo a bootstrapping compiler. Pugs was an initial implementation of Perl 6 written in Haskell, led by Audrey Tang. Pugs used to be the most advanced implementation of Perl 6. As of mid-2007, it is mostly dormant, with updates made only to track the latest version of the Glasgow Haskell Compiler (GHC). As of November 2014, Pugs is unmaintained. In 2007, v6-MiniPerl6 ("mp6") and its reimplementation, v6-KindaPerl6 ("kp6") were written as a means to bootstrap the Perl-6.0.0 STD, using Perl 5. The STD is a full grammar for Perl 6 and is written in Perl 6. In theory, anything capable of parsing the STD and generating executable code is a suitable bootstrapping system for Perl 6. kp6 is currently compiled by mp6 and can work with multiple backends. mp6 and kp6 are not full Perl 6 implementations and are designed only to implement the minimum featureset required to bootstrap a full Perl 6 compiler. Yapsi was a Perl 6 compiler and runtime written in Perl 6. As a result, it required an existing Perl 6 interpreter, such as one of the Rakudo Star releases, to run. Niecza, another major Perl 6 implementation effort, focused on optimization and efficient implementation research. It targets the Common Language Infrastructure. 
Module system The Raku specification requests that modules be identified by name, version, and authority. It is possible to load only a specific version of a module, or even two modules of the same name that differ in version or authority. As a convenience, aliasing to a short name is provided. CPAN, the Perl module distribution system, does not yet handle Raku modules. Instead a prototype module system is in use. Major changes from Perl Perl and Raku differ fundamentally, though in general the intent has been to "keep Raku Perl", so that Raku is clearly "a Perl programming language". Most of the changes are intended to normalize the language, to make it easier for novice and expert programmers alike to understand, and to make "easy things easier and hard things more possible". A major non-technical difference between Perl and Raku is that Raku began as a specification. This means that Raku can be re-implemented if needed, and it also means that programmers do not have to read the source code for the ultimate authority on any given feature. In contrast, in Perl, the official documentation is not considered authoritative and only describes the behavior of the actual Perl interpreter informally. Any discrepancies found between the documentation and the implementation may lead to either being changed to reflect the other, a dynamic which drives the continuing development and refinement of the Perl releases. In Raku, the dynamic type system of Perl has been augmented by the addition of static types, so a variable can be declared with an explicit type. However, static typing remains optional, so programmers can do most things without any explicit typing at all. Raku thus offers a gradual typing system, whereby the programmer may choose to use static typing, use dynamic typing, or mix the two. Perl defines subroutines without formal parameter lists at all (though simple parameter counting and some type checking can be done using Perl's "prototypes"). 
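The gradual typing described above can be illustrated with a short sketch (variable and subroutine names are illustrative, not from the article):

```raku
my Int $count = 42;       # statically typed container: only Int values allowed
my $anything = 'text';    # untyped container: any value is fine
$anything = 3.14;         # still fine; the container carries no type constraint

sub double(Int $n --> Int) { $n * 2 }   # typed parameter and return value
say double(21);           # 42
# double('x');            # would be rejected by the type check
```

Typed and untyped code can be mixed freely in one program, which is the essence of gradual typing.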
Subroutine arguments passed in are aliased into the elements of the array @_. If the elements of @_ are modified, the changes are reflected in the original data. Raku introduces true formal parameters to the language, declared in an explicit signature attached to the subroutine. As in Perl, the formal parameters (i.e., the variables in the parameter list) are aliases to the actual parameters (the values passed in), but by default, the aliases are constant so they cannot be modified. They may be declared explicitly as read-write aliases for the original value or as copies using the is rw or is copy directives respectively should the programmer require them to be modified locally. Raku provides three basic modes of parameter passing: positional parameters, named parameters, and slurpy parameters. Positional parameters are the typical ordered list of parameters that most programming languages use. All parameters may also be passed by using their name in an unordered way. Named-only parameters (indicated by a : before the parameter name) can only be passed by name, and never capture a positional argument. Slurpy parameters (indicated by an * before the parameter name) are Raku's tool for creating variadic functions. A slurpy hash will capture remaining passed-by-name parameters, whereas a slurpy array will capture remaining passed-by-position parameters. Positional parameters are always required unless followed by ? to indicate that they are optional. Named parameters are optional by default, but may be marked as required by adding ! after the variable name. Slurpy parameters are always optional. Parameters can also be passed to arbitrary blocks, which act as closures. This is how, for example, for and while loop iterators are named. A list can, for instance, be traversed three elements at a time by passing them to the loop's block as the variables $a, $b, and $c. 
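The parameter-passing modes and the block-with-parameters idiom described above can be sketched as follows (subroutine names are hypothetical):

```raku
# $name is positional; :$greeting is named-only; *@rest is a slurpy array
# (a slurpy hash, *%opts, would instead capture remaining named arguments)
sub greet($name, :$greeting = 'Hello', *@rest) {
    say "$greeting, $name!";
}
greet('Ada');                            # positional only
greet('Ada', greeting => 'Hi', 1, 2);    # a named argument plus extra positionals

# 'is copy' gives the body a modifiable local copy of the argument
sub increment($n is copy) { $n += 1; $n }
say increment(41);                       # 42

# a block receiving a list three elements at a time
for 1 .. 9 -> $a, $b, $c {
    say $a + $b + $c;                    # 6, 15, 24
}
```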
This is generally referred to as a "pointy sub" or "pointy block", and the arrow behaves almost exactly like the sub keyword, introducing an anonymous closure (or anonymous subroutine in Perl terminology). In Perl, sigils – the punctuation characters that precede a variable name – change depending on how the variable is used; accessing a single element of an array, for example, switches the @ sigil to $. In Raku, sigils are invariant: they do not change based on whether the array or an array element is needed. The variance in Perl is inspired by number agreement in English and many other natural languages. However, this conceptual mapping breaks down when using references, since they may refer to data structures even though they are scalars. Thus, dealing with nested data structures may require an expression of both singular and plural form in a single term. This complexity has no equivalent either in common use of natural language or in other programming languages, and it causes high cognitive load when writing code to manipulate complex data structures. The equivalent Raku code is simpler because the sigil stays the same at every level of nesting. Perl supports object-oriented programming via a mechanism known as blessing. Any reference can be blessed into being an object of a particular class. A blessed object can have methods invoked on it using the "arrow syntax" which will cause Perl to locate or "dispatch" an appropriate subroutine by name, and call it with the blessed variable as its first argument. While extremely powerful, it makes the most common case of object orientation, a struct-like object with some associated code, unnecessarily difficult. In addition, because Perl can make no assumptions about the object model in use, method invocation cannot be optimized very well. In the spirit of making the "easy things easy and hard things possible", Raku retains the blessing model and supplies a more robust object model for the common cases. 
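Sigil invariance as described above can be sketched with a few accesses (the Perl 5 forms appear only in comments for contrast):

```raku
my @array  = 'a', 'b', 'c';
say @array[1];        # b — element access keeps the @ sigil (Perl 5: $array[1])

my %scores = alice => 10, bob => 7;
say %scores<alice>;   # 10 — hash lookup keeps the % sigil (Perl 5: $scores{alice})

my @nested = [1, 2], [3, 4];
say @nested[1][0];    # 3 — the sigil stays @ at every level of nesting
```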
For example, a class can encapsulate a Cartesian point by declaring attributes for its coordinates. In method calls, the dot replaces the arrow in a nod to the many other languages (e.g. C++, Java, Python, etc.) that have coalesced around dot as the syntax for method invocation. In the terminology of Raku, $.x is called an "attribute". Some languages call these fields or members. The method used to access an attribute is called an "accessor". An auto-accessor method is a method created automatically and named after the attribute's name, such as a method x for an attribute $.x. These accessor functions return the value of the attribute. When a class or individual attribute is declared with the is rw modifier (short for "read/write"), the auto-accessors can be passed a new value to set the attribute to, or the attribute can be directly assigned to as an lvalue. Auto-accessors can be replaced by user-defined methods, should the programmer desire a richer interface to an attribute. Attributes can only be accessed directly from within a class definition via the $! syntax regardless of how the attributes are declared. All other access must go through the accessor methods. The Raku object system has inspired the Moose framework that introduces many of Raku's OOP features to Perl. Inheritance is the technique by which an object or type can re-use code or definitions from existing objects or types. For example, a programmer may want to have a standard type but with an extra attribute. Inheritance in other languages, such as Java, is provided by allowing Classes to be sub-classes of existing classes. Raku provides for inheritance via Classes, which are similar to Classes in other languages, and Roles. Roles in Raku take on the function of interfaces in Java, mixins in Ruby, and traits in PHP and in the Smalltalk variant Squeak. These are much like classes, but they provide a safer composition mechanism. 
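A minimal sketch of the Cartesian point class described above (the distance method is illustrative, not from the article):

```raku
class Point {
    has $.x is rw;    # public attribute with an auto-accessor; 'is rw' allows assignment
    has $.y is rw;

    method distance-from-origin() {
        sqrt($!x ** 2 + $!y ** 2)   # $! accesses the attribute directly inside the class
    }
}

my $p = Point.new(x => 3, y => 4);
say $p.x;                       # 3, via the auto-accessor
$p.x = 6;                       # lvalue assignment works because of 'is rw'
say $p.distance-from-origin;    # recomputed with the updated coordinate
```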
When used with classes, they perform composition rather than adding to the inheritance chain. Roles define nominal types; they provide semantic names for collections of behavior and state. The fundamental difference between a role and a class is that classes can be instantiated; roles cannot. Although roles are distinct from classes, it is possible to write Raku code that directly instantiates a role or uses a role as a type object; in such cases, Raku will automatically create a class with the same name as the role, making it possible to transparently use a role as if it were a class. Essentially, a role is a bundle of (possibly abstract) methods and attributes that can be added to a class without using inheritance. A role can even be added to an individual object; in this case, Raku will create an anonymous subclass, add the role to the subclass, and change the object's class to the anonymous subclass. For example, a Dog is a Mammal because dogs inherit certain characteristics from Mammals, such as mammary glands and (through Mammal's parent, Vertebrate) a backbone. On the other hand, dogs also may have one of several distinct types of behavior, and these behaviours may change over time. For example, a Dog may be a Pet, a Stray (an abandoned pet acquires survival behaviours not associated with a pet), or a Guide for the blind (guide dogs are trained, so they do not start life as guide dogs). However, these are sets of additional behaviors that can be added to a Dog. It is also possible to describe these behaviors in such a way that they can be usefully applied to other animals, for example, a Cat can equally be a Pet or Stray. Hence, Dog and Cat are distinct from each other, while both remain within the more general category Mammal. So Mammal is a Class and Dog and Cat are classes that inherit from Mammal. But the behaviours associated with Pet, Stray, and Guide are Roles that can be added to Classes, or objects instantiated from Classes. 
Roles are added to a class or object with the does keyword. Inheritance from a class is shown with a different keyword, is. The keywords reflect the differing meanings of the two features: role composition gives a class the behavior of the role, but doesn't indicate that it is truly the same thing as the role. Although roles are distinct from classes, both are types, so a role can appear in a variable declaration where one would normally put a class. For example, a Blind role for a Human could include an attribute of type Guide; this attribute could contain a Guide Dog, a Guide Horse, a Guide Human, or even a Guide Machine. Perl's regular expression and string-processing support has always been one of its defining features. Since Perl's pattern-matching constructs have exceeded the capabilities of regular language expressions for some time, Raku documentation exclusively refers to them as regexes, distancing the term from the formal definition. Raku provides a superset of Perl features with respect to regexes, folding them into a larger framework called "rules", which provide the capabilities of context-sensitive parsing formalisms (such as the syntactic predicates of parsing expression grammars and ANTLR), as well as acting as a closure with respect to their lexical scope. Rules are introduced with the rule keyword, which has a usage quite similar to subroutine definition. Anonymous rules can also be introduced with the regex (or rx) keyword, or they can simply be used inline as regexps were in Perl via the m (matching) or s (substitute) operators. In Apocalypse 5, Larry Wall enumerated 20 problems with "current regex culture". Among these were that Perl's regexes were "too compact and 'cute'", had "too much reliance on too few metacharacters", "little support for named captures", "little support for grammars", and "poor integration with 'real' language". 
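The does and is keywords described above can be sketched using the article's Dog/Pet illustration (attribute and method names are hypothetical):

```raku
class Mammal { }

role Pet {
    has $.name is rw;
    method describe() { "a pet named " ~ $.name }
}

# 'is' declares class inheritance; 'does' composes a role
class Dog is Mammal does Pet { }

my $dog = Dog.new(name => 'Rex');
say $dog.describe;      # a pet named Rex
say $dog ~~ Pet;        # True — the role also acts as a type
say $dog ~~ Mammal;     # True — via class inheritance
```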
Some Perl constructs have been changed in Raku, optimized for different syntactic cues for the most common cases. For example, the parentheses (round brackets) required in control flow constructs in Perl are now optional. Also, the , (comma) operator is now a list constructor, so enclosing parentheses are no longer required around lists: assigning a bare comma-separated sequence such as 1, 2, 3, 4 to @array makes it an array with exactly those four elements. Raku allows comparisons to "chain". That is, a sequence of comparisons such as 0 < $x < 10 is allowed. This is treated as if each left-to-right comparison were performed on its own, and the result is logically combined via the and operation. Raku uses the method of lazy evaluation of lists that has been a feature of some functional programming languages such as Haskell. Assigning a list of infinite size (for example, an infinite range) to the array @integers will neither crash nor hang indefinitely in attempting to expand the list, so long as only a limited number of slots are searched. This simplifies many common tasks in Raku including input/output operations, list transformations, and parameter passing. Related to lazy evaluation is the construction of lazy lists using gather and take, behaving somewhat like generators in languages like Icon or Python. A gather block can, for instance, produce an infinite list of square numbers in $squares; lazy evaluation of the gather ensures that elements are only computed when they are accessed. Raku introduces the concept of junctions: values that are composites of other values. In their simplest form, junctions are created by combining a set of values with junctive operators: | indicates a value which is equal to either its left- or right-hand arguments. & indicates a value which is equal to both its left- and right-hand arguments. These values can be used in any code that would use a normal value. Operations performed on a junction act on all members of the junction equally, and combine according to the junctive operator. 
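The constructs described above — optional parentheses, the comma as list constructor, chained comparisons, lazy infinite lists, and gather/take — can be sketched together (variable names are illustrative):

```raku
my $x = 7;
if $x > 5 { say "big" }              # control-flow parentheses are optional

my @array = 1, 2, 3, 4;              # comma builds the list; no parens needed
say @array.elems;                    # 4

say 0 < $x < 10;                     # chained comparison: True

my @integers = 0 .. Inf;             # a lazy infinite list; safe to assign
say @integers[5];                    # 5 — only the slots used are computed

my $squares = lazy gather {
    take $_ ** 2 for 0 .. Inf;       # values produced only on demand
};
say $squares[3];                     # 9
```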
So, ("apple"|"banana") ~ "s" would yield "apples"|"bananas". In comparisons, junctions return a single true or false result for the comparison. "any" junctions return true if the comparison is true for any one of the elements of the junction. "all" junctions return true if the comparison is true for all of the elements of the junction. Junctions can also be used to more richly augment the type system by introducing a style of generic programming that is constrained to junctions of types. In low-level languages, the concept of macros has become synonymous with textual substitution of source-code due to the widespread use of the C preprocessor. However, high-level languages such as Lisp pre-dated C in their use of macros that were far more powerful. It is this Lisp-like macro concept that Raku will take advantage of. The power of this sort of macro stems from the fact that it operates on the program as a high-level data structure, rather than as simple text, and has the full capabilities of the programming language at its disposal. A Raku macro definition will look like a subroutine or method definition, and it can operate on unparsed strings, an AST representing pre-parsed code, or a combination of the two. In the simplest case, such a macro is no more complex than a C-style textual substitution, but because parsing of the macro parameter occurs before the macro operates on the calling code, diagnostic messages would be far more informative. However, because the body of a macro is executed at compile time each time it is used, many techniques of optimization can be employed. It is even possible to eliminate complex computations from resulting programs by performing the work at compile-time. In Perl, identifier names can use the ASCII alphanumerics and underscores also available in other languages. In Raku, the alphanumerics can include most Unicode characters. 
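The junction behaviour described above can be sketched as follows (the string example mirrors the one in the text):

```raku
my $any = 'apple' | 'banana';        # an 'any' junction
say so $any eq 'apple';              # True: the comparison holds for one member

my $all = 2 & 4;                     # an 'all' junction
say so $all %% 2;                    # True: every member is divisible by 2

my $plural = ('apple' | 'banana') ~ 's';   # the junction "apples"|"bananas"
say so $plural eq 'apples';          # True
```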
In addition, hyphens and apostrophes can be used (with certain restrictions, such as not being followed by a digit). Using hyphens instead of underscores to separate words in a name leads to a style of naming called "kebab case". Examples The hello world program is a common program used to introduce a language. In Raku, hello world is say 'Hello world'; — though there is more than one way to do it. The factorial function in Raku can be defined in a few different ways. Quicksort is a well-known sorting algorithm; a working implementation using the functional programming paradigm can be written succinctly in Raku. Tower of Hanoi is often used to introduce recursive programming in computer science; an implementation can use Raku's multi-dispatch mechanism and parametric constraints. Books In the history of Raku there were two waves of book writing. The first wave followed the initial announcement of Perl 6 in 2000. Those books reflect the state of the design of the language at that time, and contain mostly outdated material. The second wave, which followed the announcement of Version 1.0 in 2015, includes several books that have already been published and some others that are in the process of being written. Also, a book dedicated to one of the first Perl 6 virtual machines, Parrot, was published in 2009. References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_ref-40] | [TOKENS: 4993]
Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century CE astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes.
However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. 
Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. 
Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. 
Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. 
This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö) and the stars "hanging" from the Belt as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. 
The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic, saif al-jabbar, meaning "sword of the giant".
In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. 
Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry whoever could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married the chief's daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipinos referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. The film distribution company Orion Pictures used the constellation as its logo.
In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and is sometimes shown holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis).
His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Griefer] | [TOKENS: 1647]
Griefer A griefer or bad-faith player is a player in a multiplayer video game who deliberately annoys, disrupts, or trolls others in ways that are not part of the intended gameplay. Griefing is often accomplished by killing other players unnecessarily, destroying player-built structures, or stealing items. A griefer derives pleasure from the act of annoying other users, and as such is a nuisance in online gaming communities. History The term "griefing" was applied to online multiplayer video games by the year 2000 or earlier, as illustrated by postings to the rec.games.computer.ultima.online USENET group. The player is said to cause "grief" in the sense of "giving someone grief".[citation needed] The term dates to the late 1990s, when it was used to describe the willfully antisocial behaviors seen in early massively multiplayer online games like Ultima Online and, later in the 2000s, in first-person shooters such as Counter-Strike. Even before it had a name, griefer-like behavior was familiar in the virtual worlds of text-based Multi-User Domains (MUDs), where joyriding invaders inflicted "virtual rape" and similar offenses on the local populace. Julian Dibbell's 1993 article "A Rape in Cyberspace" analyzed the griefing events in a particular MUD, LambdaMOO, and the staff's response. In the culture of massively multiplayer online role-playing games (MMORPGs) in Taiwan, such as Lineage, griefers are known as "white-eyed"—a metaphor meaning that their eyes have no pupils and so they look without seeing. Behaviors other than griefing that can cause players to be stigmatized as "white-eyed" include cursing, cheating, stealing, or unreasonable killing. Methods Methods of griefing differ from game to game. What might be considered griefing in one area of a game may even be an intended function or mechanic in another area.
Common methods may include, but are not limited to: The term is sometimes applied more generally to refer to a person who uses the internet to cause distress to others as a prank, or to intentionally inflict harm, as when it was used to describe an incident in March 2008, when malicious users posted seizure-inducing animations on epilepsy forums. Industry response Many subscription-based games actively oppose griefers, since their behavior can drive away business. It is common for developers to release server-side upgrades and patches to annul griefing methods. Many online games employ gamemasters that reprimand offenders. Some use a crowdsourcing approach, where players can report griefing. Malicious players are then red-flagged, and are then dealt with at a gamemaster's discretion. As many as 25% of customer support calls to companies operating online games deal specifically with griefing. Blizzard Entertainment has enacted software components to combat griefing. To prevent non-consensual attacks between players, some games such as Ultima Online have created separate realms for those who wish to be able to attack anyone at any time, and for those who do not. Others implemented separate servers.[citation needed] When EverQuest was released, Sony included a PvP switch where people could fight each other only if they had enabled that option. This was done in order to prevent the player-killing that was driving people away from Ultima Online, which at that time had no protection on any of its servers. Second Life bans players for harassment (defined as being rude or threatening, making unwelcome sexual advances, or performing activities likely to annoy or alarm somebody) and assault (shooting, pushing, or shoving in a safe area, or creating scripted objects that target another user and hinder their enjoyment of the game) in its community standards. 
Sanctions include warnings, suspension from Second Life, or being banned altogether.[citation needed] Eve Online has incorporated activities typically considered griefing into its gameplay mechanisms. Corporate spying, theft, scams, gate-camping, and PvP attacks on non-PvP players are all part of the gaming experience. This does not mean that the developers are indifferent to the negative effect that these activities may have on players; it is simply their choice with regard to the culture and atmosphere that they intended for the game. Players are advised to approach unfamiliar situations in the game with an appropriate level of caution, develop strategies to deal with the presence of these elements, and take personal responsibility for their in-game actions. Certain activities are allowed by the developers but are still considered illegal in the game itself and result in in-game consequences, such as the unavoidable loss of the attacker's ship when engaging in combat with a non-allowed target in high-security space. Shooters such as Counter-Strike: Global Offensive have implemented peer-review systems: if a player is reported too many times, multiple higher-ranked players are allowed to review the player, determine if the reports are valid, and apply a temporary ban to the player's account if necessary. The reported player's name is omitted during the replay, as are those of the other nine players in the game. In October 2016, Valve implemented a change that will permanently ban a player if they receive two penalties for griefing.[citation needed] Many Minecraft servers have rules against griefing. On Minecraft freebuild servers, griefing is often the destruction of another player's build; on other servers the definition varies, but almost all servers recognize griefing as harassment. Most servers use temporary bans for minor and/or first-time incidents, and indefinite bans from the server for more serious and/or repeat offences.
While many servers try to fight this, other servers, like 2b2t, allow griefing as part of the gameplay.[citation needed] By the early 2020s, Grand Theft Auto Online had experienced a drastic increase in griefing, due in part to the emergence of bugs and better money-making opportunities. Common griefing techniques within the game abuse passive mode and trivially accessible weaponized vehicles. Developer Rockstar has implemented measures such as a longer cool-down on passive mode, patching invincibility glitches, and removing passive mode from weaponized vehicles in recent updates. In addition, the game features a reputation system that marks players who accumulate excessive "bad sport points" as "bad sports", allowing them to play only in lobbies with other "bad sports". Such points are either accumulated over time or gained within a certain time frame, and are acquired by actions such as destroying another player's personal vehicle or quitting jobs early. This is one of the more controversial features of the game, as some point out flaws such as the game not considering whether destruction of a vehicle was self-defense.[citation needed] Bethesda Softworks Games, a division of ZeniMax Media Inc., has a clear code of conduct that does not allow griefing, as indicated in section 3.2. Whether this has any effect is debatable, with numerous forum posts about ongoing griefing behaviour. Because the boilerplate response to a ticket about such a player contains the clause "Please note, to protect individual privacy, we do not disclose the outcome of our investigation.", there is no transparency to indicate whether violations of the code of conduct (by griefing) are taken seriously by Bethesda/ZeniMax. Fallout 76 attempted to discourage players from griefing by marking them as wanted criminals, whom other players can collect a reward for killing.
Wanted players cannot see any other players on the world map, and must rely on their normal player view. However, this has instead become another mechanism to engage in griefing, by luring other players into PvP, in which they largely have no chance to survive because of the perk loadout and weapons used by the griefer. An example of this is by breaking resource locks in a player camp, which will make the griefer wanted, with the hope that the camp owner will find them to retaliate, and thereby initiate PvP with the griefer.[citation needed] See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Jewish_Autonomism] | [TOKENS: 2538]
Jewish Autonomism Jewish Autonomism, not connected to the contemporary political movement autonomism, was a non-Zionist political movement and ideology that emerged in the Russian and Austro-Hungarian empires before spreading throughout Eastern Europe in the late 19th and early 20th centuries. In the late 19th century, Jewish Autonomism was seen "together with Zionism [as] the most important political expression of the Jewish people in the modern era." One of its first and major proponents was the historian and activist Simon Dubnow. Jewish Autonomism is often referred to as "Dubnovism" or "folkism". The Autonomists believed that the future survival of the Jews as a nation depended on their spiritual and cultural strength, on developing "spiritual nationhood", and on the viability of the Jewish diaspora as long as Jewish communities maintained self-rule; they rejected assimilation. Autonomists often stressed the vitality of modern Yiddish culture. Various concepts of Autonomism were adopted in the platforms of the Folkspartei, the Sejmists, and socialist Jewish parties such as the Bund. The movement's beliefs were similar to those of the Austro-Marxists, who advocated national personal autonomy within the multinational Austro-Hungarian empire, and of cultural pluralists in America such as Randolph Bourne and Horace Kallen. Origins of Jewish Autonomism Though Simon Dubnow was key in proliferating Autonomism's popularity, his ideas were not completely novel. In 1894, Jakob Kohn, a board member of the National Jewish Party of Austria, published Assimilation, Antisemitismus und Nationaljudentum, a philosophical work detailing his party's perspective. Kohn argued that Jews not only shared a religion but were also connected by a long, deep-rooted ethnic history of centuries of discrimination, attempts at assimilation, and exile. To Kohn, the Jews were a nation.
Similar to Dubnow, Kohn called for the establishment of a Jewish organization to represent Jewish interests within the state's policies. Again similar to Dubnow, Kohn denounced assimilation, claiming that it worked against the establishment of a Jewish nation. The origins of Autonomism and Dubnow's ideas remain unclear. Notable philosophical thinkers from Eastern and Western Europe, including Ernest Renan, John Stuart Mill, Herbert Spencer and Auguste Comte, are cited as influences on Dubnow's ideas. Ideas from Vladimir Solovyov, Dmitry Pisarev, Nikolay Chernyshevsky and Konstantin Aksakov concerning the Russian people's distinct spiritual heritage may have given rise to Dubnow's own ideas on the Jews' shared heritage. In his memoirs, Dubnow himself refers to some of these thinkers as major influences. In addition, Dubnow had been immersed in historiographical study of Russian Jewry, its institutions and spiritual movements. This research led Dubnow to question the legitimacy of the Russians' monopoly on political power and fueled his own demands for Jewish political representation. Jewish Autonomist ideology Jewish Autonomism advocates for the sovereignty of the Jews without a division from the governing state. Instead of separation, Jewish Autonomism was concerned with establishing Jewish cultural minority rights within the state, primarily with an emphasis on language and educational rights. Dubnow argued that Jewish autonomism allowed Jews to simultaneously identify with Jewish nationalism and remain loyal to their own state. Dubnow was the preeminent Jewish historian of his time, and his Autonomism was based on his analysis of history and the implications he drew for the future. Dubnow broke the history of the Jewish nation (and all nations) into three different periods: tribal, political-territorial, and spiritual.
The Jewish nation had experienced a series of tests (the loss of political independence, the loss of a homeland, the loss of a unifying language) which, by passing, had allowed it (and, so far, only it) to ascend to the highest stage of nationhood. Without those traditional markers of nationhood, the Jewish people's continued existence was proof to him that they "had crystallized into a spiritual people... drawing on the natural or intellectual will to live." Thus, in contrast to many other ideologies, Dubnow believed that as a nation the Jews had transformed for the better: from a nation connected by a territory to a nation connected by spirituality and heritage. Whereas Zionism advocated the establishment of an entirely separate Jewish state, Autonomism sought Jewish sovereignty without separation from the governing state. In fact, Dubnow felt that by his generation the Jewish nation (unlike other nations) had superseded the use of force, and that if the Jewish nation ever developed into a state that resorted to military might, it would signify a step backwards. Given this disagreement, it makes sense that Dubnow was skeptical of both the mission and the practicality of a Jewish nation-state in Palestine, instead seeing the diaspora as the true home of the Jewish people. However, he was more receptive to Ahad Ha'am's idea of a cultural center in Palestine, although Dubnow saw it as one of many Jewish centers rather than the dominant one. As Dubnow aged, he became more receptive towards Zionism; his final thoughts on the subject were recorded in 1937: "a Jewish State will accommodate only a part of the Diaspora, just as was the case in ancient times... a small Judea [Palestine] alongside a ten-tribe Israel [the Diaspora]."
Unlike most assimilationists, Dubnow believed not only in full civil rights for Jews as individuals, but also stressed the need for rights for the Jewish nation within a multiethnic Russia. Dubnow feared that the Jews of the Diaspora would lose their spiritual connection with one another through assimilation, going so far as to claim that "no self respecting minority will take notice of such accusations [of separatism] because it considers its free development to be a sacred and inalienable right." Jewish Autonomism's spread to the United States Although Jewish Autonomism originated in Eastern Europe, the movement spread to the United States, a result of the prominence that American Jews obtained in negotiating for Jewish rights in Eastern Europe from 1919 to 1945. Oscar Janowsky was perhaps the most influential advocate of American diaspora nationalism, yet his version of Jewish Autonomism differed in key ways from Dubnow's. First, he called for both national autonomy in Eastern Europe and national sovereignty in Palestine, a compromise between the Zionist and traditional autonomist positions. Janowsky believed that if autonomism could be successful in meeting Jewish national demands in Eastern Europe, it could also present a solution for the Arab population of Palestine. Eastern European Jews would benefit from the international recognition of Jewish nationalism that the creation of a state in Palestine would bring, and that state could simultaneously serve as living proof that an Arab minority population could retain nationhood and autonomy in a majority-Jewish state. Janowsky also broke with traditional Dubnowian thought in suggesting that Jewish people in the United States and enlightened Western Europe did not need the form of national autonomy he favored for Eastern European Jews, in some cases favoring assimilation there, unlike Dubnow.
Other prominent American autonomists disagreed with Janowsky, viewing Jewish cultural autonomy in the United States as essential rather than unnecessary or subservient to cultural autonomy in Eastern Europe or political autonomy in Palestine. Key historical moments for Jewish Autonomism In the early 1900s, the Folkspartei, a political party advocating for Jewish Autonomism, strove for good relations with other Jewish parties, including the Zionists. An attempt was made to establish a Jewish National Club, an inter-party organization to coordinate collaboration between the two parties. However, this failed when the Folkists objected to accepting an unequal number of committee representatives. One of the primary functions of the Paris Peace Conference of 1919 was to grant new states international recognition as the successors of failed and outdated multi-ethnic empires. Central to the conference's objectives was devising a solution for the minority groups that resided in each new state. The Jewish question in particular was put front and center, as if paradigmatic of all national minority issues. Jewish leaders demanded that they be recognized as an autonomous group with the right to organize their own religious, cultural, philanthropic, and social institutions. This primarily meant the ability for Jews to run schools and other cultural institutions in the language of their choosing. While these represented important achievements, some Jewish leaders who took a more maximalist view of minority rights saw the Paris Peace Conference as insufficient. Although Jewish citizenship and linguistic and cultural rights were secured, proposals for Jewish membership in the League of Nations, reparations, and self-regulated emigration were not adopted. Without these, some felt that Jewish people had still not achieved true diaspora nationalism.
Unfortunately, even the limited objectives won by diaspora nationalists were not realized, as the Peace Conference relied either on nation-states to enforce these rights themselves (which they were never keen to do) or on the League of Nations to punish violators (which never occurred, due to its gridlock and incompetence). The Holocaust was the end of Jewish Autonomism as a popular concept. The failure of Jewish autonomists to foresee the horrors and destruction that the Holocaust would cause permanently tainted their message, and most Jewish thinkers gravitated toward supporting Zionism. The Jewish populace at large gave up on ideas of both assimilation and minority rights, viewing the Holocaust as a culmination of those ideologies' flaws. Tragically, Jewish Autonomism's most influential proponent, Simon Dubnow, was murdered in the 1941 Rumbula massacre, and with his death came the end of Autonomism's practical impact in politics. From April 1947, Folkists active in post-war Łódź attempted to operate in Poland under the name Frajlandige Organization (Frajland-Ligt), part of an international organization of the same name. Representatives of "Frajland-Ligt" advocated for the creation of several autonomous centers where Jews could live. They believed that Jews could settle in what was then Palestine, in Birobidzhan in the Russian Far East, and in Suriname, then a colony of the Netherlands known as Dutch Guiana. "Frajland-Ligt" activists particularly emphasized the possibility of some Jews emigrating to Dutch Guiana. They cited several arguments:
- opposition to assimilation, which was popular in some European countries;
- disputes between Zionists, some Arabs, and the authorities in London over the future of Palestine, along with the Zionists' reluctance to cultivate Yiddish culture;
- the observation that the Jewish nation, deprived of its ancient homeland, was already developing in the diaspora countries.
Proposing the settlement of approximately 30,000 Jews in Dutch Guiana, representatives of "Frajland-Ligt" pointed out that these areas were sparsely populated. They planned to build an economy based on agriculture and industry. They raised the possibility of granting Jews local government, autonomous rights, and temporary tax exemptions. "Frajland-Ligt" activists argued that Jews would be citizens of Dutch Guiana, with the opportunity to study Yiddish and Dutch in schools. Supporters of "Frajland-Ligt" placed particular value on ensuring the free development of Jewish culture, including religion and customs, as well as the ability to observe Jewish Sabbaths and holidays. To further their cause, representatives of "Frajland-Ligt" held talks with the Dutch authorities between 1946 and 1948 regarding the possibility of Jewish emigration to Dutch Guiana. These talks ended in failure. The Łódź branch of "Frajland-Ligt" had 15 members and remained unregistered from the beginning of 1948, owing to the lack of consent from Łódź's mayor, Eugeniusz Stawiński; the organization nonetheless operated until the end of 1948. The Jewish Democratic Party (ŻSD) published the Bulletin of the Jewish Democratic Party, which appeared from May 1946 to May 1948 with a circulation of 2,000 copies. The party opposed the Central Committee of Polish Jews, which was dominated by Jewish communists from the Polish Workers' Party (PPR). Despite numerous attempts, the party did not establish cooperation with the Polish Democratic Party.
========================================
[SOURCE: https://en.wikipedia.org/wiki/File:Abeoforma_whisleri-2.jpg] | [TOKENS: 106]
File:Abeoforma whisleri-2.jpg
========================================
[SOURCE: https://en.wikipedia.org/wiki/File:PIA24039-MarsCuriosityRover-DustDevil-20200809.gif] | [TOKENS: 494]
File:PIA24039-MarsCuriosityRover-DustDevil-20200809.gif Summary https://photojournal.jpl.nasa.gov/catalog/PIA24039 NASA's Curiosity Mars rover spotted this dust devil with one of its Navigation Cameras around 11:35 a.m. local Mars time on Aug. 9, 2020 (the 2,847th Martian day, or sol, of the mission). The frames in this GIF were shot over 4 minutes and 15 seconds. Taken from the "Mary Anning" drill site, this dust devil appears to be passing through small hills just above Curiosity's present location on Mount Sharp. The dust devil is approximately one-third to a half-mile (half-a-kilometer to a kilometer) away and estimated to be about 16 feet (5 meters) wide. The dust plume disappears past the top of the frame, so an exact height can't be known, but it's estimated to be at least 164 feet (50 meters) tall. Contrast has been modified to make frame-to-frame changes easier to see. Licensing: Attribution.
========================================
[SOURCE: https://en.wikipedia.org/w/index.php?title=Joke&mobileaction=toggle_view_mobile] | [TOKENS: 8460]
Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. 
They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple from Adab, dating back to 1200 BC.
It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and it is attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch".
During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of the book documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour."
These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke.
"Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. "An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. 
A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. 
In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. This study adds credence to the common experience of being exposed to an off-colour joke: a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary.
In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking.
These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or forwarding it on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with".
This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. "Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation."

Joke cycles

A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes).
A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past fall into several broad categories. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish.
In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by the widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation previously undertaken by folklorists and ethnologists.

Classification systems

As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots.
…" Due to its focus on older tale types and obsolete actors (e.g., the numbskull), the Aarne–Thompson Index is of little help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices: one can select an index for medieval Spanish folk narratives, another for linguistic verbal jokes, and a third for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation: one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents always have at least one actor and usually an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it.
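The multiple-heading problem can be made concrete with a minimal sketch. The motif labels below are invented placeholders, not actual Thompson motif numbers; the point is only that filing a narrative under each of its motifs necessarily scatters one text across several headings.

```python
from collections import defaultdict

# Toy motif index: each narrative is filed under every motif it contains
# (actor, item, incident), so the same text appears under multiple headings.
# Motif labels here are illustrative placeholders, not real Thompson codes.
index = defaultdict(list)

def classify(text_id, motifs):
    """File one narrative under each of its motifs."""
    for motif in motifs:
        index[motif].append(text_id)

classify("joke-001", ["actor:fool", "item:lightbulb", "incident:misunderstanding"])
classify("joke-002", ["actor:fool", "incident:misunderstanding"])

# The same joke is now retrievable under three separate headings,
# which is exactly the ordering ambiguity described above.
print(sorted(index["actor:fool"]))   # both jokes share the actor motif
print(index["item:lightbulb"])       # only joke-001 carries the item motif
```

Nothing in the index says which heading is the "right" home for a text, which is the confusion about where to order an item and where to find it noted above.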
A third significant problem is the "excessive prudery" common in the middle of the 20th century, which meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: "…Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry." It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Language (LA), Narrative Strategy (NS), Target (TA), Situation (SI), Logical Mechanism (LM), and Script Opposition (SO). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels.
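A rough sketch of such a concatenated classification label in code. The field names follow the standard GTVH abbreviations for the six Knowledge Resources; every value assigned below is invented for illustration, not drawn from any published GTVH analysis.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass(frozen=True)
class JokeLabel:
    """A GTVH-style concatenated label: one value per Knowledge Resource.
    TA and LM may be empty, per the theory's own caveat."""
    SO: str                # Script Opposition
    LM: Optional[str]      # Logical Mechanism
    SI: str                # Situation
    TA: Optional[str]      # Target
    NS: str                # Narrative Strategy
    LA: str                # Language (the actual wording chosen)

def similarity(a: JokeLabel, b: JokeLabel) -> int:
    """Count shared KR values: the more KRs two jokes share, the more similar."""
    return sum(getattr(a, f.name) == getattr(b, f.name) for f in fields(a))

# Two hypothetical lightbulb jokes: the shared situation forces the riddle
# narrative strategy; they differ only in target and wording.
j1 = JokeLabel(SO="dumb/smart", LM="faulty reasoning", SI="lightbulb",
               TA="group A", NS="riddle", LA="wording 1")
j2 = JokeLabel(SO="dumb/smart", LM="faulty reasoning", SI="lightbulb",
               TA="group B", NS="riddle", LA="wording 2")
print(similarity(j1, j2))  # 4 of 6 KRs shared
```

Counting shared KR values is the simplest possible similarity measure; the hierarchy described above suggests a weighted version in which differences in higher-level KRs count for more.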
"The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed for any verbal humour.

Joke and humour research

Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These distinctions become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour".
Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual; as such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smile. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed that neither smiles nor laughter are always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes, and across languages this would not even be feasible, so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues?
Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much given to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in humour research. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline.
The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One such study has already been discussed above, in which Harvey Sacks describes in detail the sequential organisation of telling a single joke.
Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. 
The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. 
In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. 
As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
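A toy punning program of the kind described above can be caricatured in a few lines: a fixed template filled from a finite table of pre-defined options, with no semantic scripts behind it. This sketch is not any real system; the template and every table entry are invented for illustration.

```python
import random

# A toy punning-riddle generator in the spirit of early template-based
# systems: no intelligence, just a fixed template filled from a finite,
# hand-built table of sound-alike substitutions (all entries invented).
PUN_TABLE = [
    # (question topic, sound-alike answer)
    ("a fish that plays guitar", "a bass player"),
    ("a cold dog", "a chili dog"),
]

TEMPLATE = "What do you call {topic}? {answer}."

def make_pun(rng=random):
    topic, answer = rng.choice(PUN_TABLE)
    return TEMPLATE.format(topic=topic, answer=answer)

# Every output is grammatical and template-shaped, but the program has
# no semantic scripts: it cannot judge whether the result is funny.
print(make_pun())
```

Such a program can only ever emit its pre-defined combinations, which is why systems like this display no intelligence in the sense discussed above: recognising or generating a genuinely new joke would require the wide-ranging semantic scripts the text describes.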
========================================
[SOURCE: https://en.wikipedia.org/w/index.php?title=Maor_Farid&printable=yes]
Maor Farid

Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, a social activist, and an author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for empowering youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200 and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation [he]. Dr. Farid was named to the Forbes 30 Under 30 list of 2019 and won the Moskowitz Prize for Zionism.

Early life

Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from immigrant families of Mizrahi Jews from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age and was classified as a problematic and violent student; his ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school for the sake of a better future for his family. During elementary school, Maor attended local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. He was later recruited to the Israel Defense Forces and drafted into the Brakim Program [he], an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander.
During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of captain.

Education and academic career

As part of the four-year Brakim Program, Maor completed his Bachelor's and Master's degrees in Mechanical Engineering at the Technion with honors. He then began his Ph.D. research as a collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were predicting the irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks in earthquakes that took place in Italy (2012) and Mexico (2017). The energy absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to implement the findings on existing sensitive systems, and won funding of 1.5 million NIS from the Pazy foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D. at the age of 24, the youngest Technion graduate of that year. At the graduation ceremonies, he honored his parents by having them receive the diplomas on his behalf.
That same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. That same year, he received the excellence research grant of the Israel Academy of Sciences and Humanities for leading his research in collaboration between MIT and the Technion.

Social activism

Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. From 2010 to 2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" [he] program for high-school students at risk in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project to mitigate social gaps in Israeli society by empowering youths from the social, economic, and geographical periphery towards excellence, self-fulfillment and formal education. In 2018, Learn to Succeed became an official non-profit organization. That same year, Farid led a 150,000 NIS crowdfunding project in order to expand the organization to a national scale.
In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he underwent from being a violent teenager to becoming the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; it gives full scholarships to STEM students from the periphery who serve as mentors of youths, both Jews and Arabs, from disadvantaged backgrounds; runs a hotline which gives online practical and emotional support to hundreds of youths, parents and educators; initiates inspirational activities with a military orientation to increase the motivation of its teenage members for significant military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed by the unit, an opportunity usually given to the students with the highest grades in the matriculation exams in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel. He also serves as a volunteer in the American Technion Society.

Personal life

Farid is married to Michal.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Midrange_computer]
Midrange computer

Midrange computers, or midrange systems, were a class of computer systems that fell in between mainframe computers and microcomputers.[failed verification] This class of machine emerged in the 1960s, with models from Digital Equipment Corporation (the PDP lines), Data General (the NOVA), and Hewlett-Packard (the HP 2100 and HP 3000) widely used in science and research as well as for business - these were referred to as minicomputers.[disputed – discuss] IBM favored the term "midrange computer" for its comparable, but more business-oriented, systems.

IBM midrange systems

Positioning

The main similarity between midrange computers and mainframes is that both are oriented towards decimal-precision computing[citation needed] and high-volume input and output (I/O), but most midrange computers have a reduced and specially designed internal architecture, with limited compatibility with mainframes. A low-end mainframe can be more affordable and less powerful than a high-end midrange system, but a midrange system is still a "replacement solution", with a different service process, OS and internal architecture. The difference between similar-sized midrange computers and superminis/minicomputers is the purpose for which they are used: supers/minis are oriented towards floating-point scientific computing, while midrange computers are oriented towards decimal business computing - though without a clear border between the classes. The earliest midrange computers were single-user business calculation machines. Virtualization, a typical feature of mainframes since 1972 (partially from 1965), was ported to midrange systems only in 1977; multi-user support was added to midrange systems in 1976 compared to 1972 for mainframes (but still significantly earlier than the limited release of x86 virtualization (1985/87) or multi-user support (1983)).
The latest midrange systems are primarily mid-class multi-user local network servers that can handle the large-scale processing of many business applications. Although not as powerful or reliable as full-size mainframe computers, they are less costly to buy, operate, and maintain than mainframe systems and thus meet the computing needs of many organizations. Midrange systems were relatively popular as powerful network servers for managing large Internet websites, though they were more oriented toward corporate intranets, extranets, and other networks. Today, midrange systems include servers used in industrial process control and manufacturing plants, and they play major roles in computer-aided manufacturing (CAM). They can also take the form of powerful technical workstations for computer-aided design (CAD) and other computation- and graphics-intensive applications. Midrange systems are also used as front-end servers that assist mainframe computers with telecommunications processing and network management. Since the end of the 1980s, when the client–server model of computing became predominant, computers of this class have instead usually been known as workgroup servers and online transaction processing servers, reflecting that they usually "serve" end users at their "client" computers. During the 1990s and 2000s, in some non-critical cases both lines were replaced by web servers, which are oriented toward working with global networks but have a weaker security pedigree and mainly use general-purpose architectures (currently x86 or ARM).
========================================
[SOURCE: https://en.wikipedia.org/wiki/Far_darrig] | [TOKENS: 226]
Far darrig A far darrig or fear dearg is a faerie of Irish mythology. The name far darrig is an Anglophone pronunciation of the Irish words fear dearg, meaning Red Man, as the far darrig is said to wear a red coat and cap. They are also sometimes known as Rat Boys as they are said to be rather fat, have dark, hairy skin, long snouts and skinny tails. According to Fairy and Folk Tales of the Irish Peasantry, the far darrig is classified as a solitary fairy along with the leprechaun and the clurichaun, all of whom are "most sluttish, slouching, jeering, mischievous phantoms". The far darrig in particular is described as one who "busies himself with practical joking, especially with gruesome joking". One example of this is replacing babies with changelings. They are also said to have some connection to nightmares.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Kansas%E2%80%93Nebraska_Act] | [TOKENS: 6701]
Kansas–Nebraska Act The Kansas–Nebraska Act of 1854 (10 Stat. 277) was a territorial organic act that created the territories of Kansas and Nebraska. It was drafted by Democratic Senator Stephen A. Douglas, passed by the 33rd United States Congress, and signed into law by President Franklin Pierce. Douglas introduced the bill intending to open new lands to development and to facilitate construction of a transcontinental railroad. However, the Kansas–Nebraska Act effectively repealed the Missouri Compromise of 1820, stoking national tensions over slavery and contributing to a series of armed conflicts known as "Bleeding Kansas". The United States had acquired vast amounts of land in the 1803 Louisiana Purchase, and since the 1840s, Douglas had sought to establish a territorial government in a portion of the Louisiana Purchase that was still unorganized. Douglas's efforts were stymied by Senator David Rice Atchison of Missouri and other Southern leaders who refused to allow the creation of territories that banned slavery; slavery would have been banned because the Missouri Compromise outlawed slavery in the territory north of latitude 36° 30′ north (except for Missouri). To win the support of Southerners like Atchison, Pierce and Douglas agreed to back the repeal of the Missouri Compromise, with the status of slavery instead decided based on "popular sovereignty". Under popular sovereignty, the citizens of each territory, rather than Congress, would determine whether slavery would be allowed. Douglas's bill to repeal the Missouri Compromise and organize Kansas Territory and Nebraska Territory won approval by a wide margin in the Senate, but faced stronger opposition in the House of Representatives. Though Northern Whigs strongly opposed the bill, it passed the House with the support of almost all Southerners and some Northern Democrats. 
After the passage of the act, pro- and anti-slavery elements flooded into Kansas to establish a population that would vote for or against slavery, resulting in a series of armed conflicts known as "Bleeding Kansas". Douglas and Pierce hoped that popular sovereignty would help bring an end to the national debate over slavery, but the Kansas–Nebraska Act outraged Northerners. The division between pro-slavery and anti-slavery forces caused by the Act was the death knell for the ailing Whig Party, which broke apart after the Act. Its Northern remnants would give rise to the anti-slavery Republican Party. The Act, and the tensions over slavery it inflamed, were key events leading to the American Civil War. Background In his 1853 inaugural address, President Franklin Pierce expressed hope that the Compromise of 1850 had settled the debate over the issue of slavery in the territories. For the Utah Territory and New Mexico Territory, lands acquired in the Mexican–American War, the Compromise left the issue of slavery to be decided by popular sovereignty. The Missouri Compromise, which banned slavery in territories north of the 36°30′ parallel, remained in place for the other U.S. territories acquired in the Louisiana Purchase, including a vast unorganized territory often referred to as "Nebraska". As settlers poured into the unorganized territory, and commercial and political interests called for a transcontinental railroad through the region, pressure mounted for the organization of the eastern parts of the unorganized territory. Though the organization of the territory was required to develop the region, an organization bill threatened to re-open the contentious debates over slavery in the territories that had taken place during and after the Mexican–American War. The topic of a transcontinental railroad had been discussed since the 1840s. 
While there were debates over the specifics, especially the route to be taken, there was a public consensus that such a railroad should be built by private interests, and financed by public land grants. In 1845, Stephen A. Douglas, then serving in his first term in the U.S. House of Representatives, had submitted an unsuccessful plan to organize the Nebraska Territory formally, as the first step in building a railroad with its eastern terminus in Chicago. Railroad proposals were debated in all subsequent sessions of Congress with cities such as Chicago, St. Louis, Quincy, Memphis, and New Orleans competing to be the jumping-off point for the construction. Several proposals in late 1852 and early 1853 had strong support, but they failed because of disputes over whether the railroad would follow a northern or a southern route. In early 1853, the House of Representatives passed a bill 107 to 49 to organize the Nebraska Territory in the land west of Iowa and Missouri. In March, the bill moved to the Senate Committee on Territories, which was headed by Douglas. Missouri Senator David Atchison announced that he would support the Nebraska proposal only if slavery were to be permitted. While the bill was silent on this issue, slavery would have been prohibited under the Missouri Compromise in the territory north of 36°30' latitude and west of the Mississippi River. Other Southern senators were as inflexible as Atchison. By a vote of 23 to 17, the Senate voted to table the motion, with every senator from the states south of Missouri voting to table. During the Senate adjournment, the issues of the railroad and the repeal of the Missouri Compromise became entangled in Missouri politics, as Atchison campaigned for re-election against the forces of Thomas Hart Benton. Atchison was maneuvered into choosing between antagonizing the state's railroad interests and antagonizing its slaveholders. 
Finally, he took the position that he would rather see Nebraska "sink in hell" before he would allow it to be overrun by free soilers. Representatives then generally found lodging in boarding houses when they were in the nation's capital to perform their legislative duties. Atchison lodged in an F Street house shared by the leading Southerners in Congress. He was the Senate's President pro tempore. His housemates included Robert T. Hunter (from Virginia, chairman of the Finance Committee), James Mason (from Virginia, chairman of the Foreign Affairs Committee) and Andrew P. Butler (from South Carolina, chairman of the Judiciary Committee). When Congress reconvened on December 5, 1853, the group, termed the F Street Mess, along with Virginian William O. Goode, formed the nucleus that would insist on slaveholder equality in Nebraska. Douglas was aware of the group's opinions and power and knew that he needed to address its concerns. Douglas was also a fervent believer in popular sovereignty—the policy of letting the voters, almost exclusively white males, of a territory decide whether or not slavery should exist in it. Iowa Senator Augustus C. Dodge immediately reintroduced the same legislation to organize Nebraska that had stalled in the previous session; it was referred to Douglas's committee on December 14. Douglas, hoping to win the support of the Southerners, publicly announced that the same principle that had been established in the Compromise of 1850 should apply in Nebraska. In the Compromise of 1850, Utah and New Mexico Territories had been organized without any restrictions on slavery, and many supporters of Douglas argued that the compromise had already superseded the Missouri Compromise. The territories were, however, given the authority to decide for themselves whether they would apply for statehood as either free or slave states whenever they chose to apply. 
The two territories, however, unlike Nebraska, had not been part of the Louisiana Purchase and had arguably never been subject to the Missouri Compromise. Congressional action The bill was reported to the main body of the Senate on January 4, 1854. It had been modified by Douglas, who had also authored the New Mexico Territory and Utah Territory Acts, to mirror the language from the Compromise of 1850. In the bill, a vast new Nebraska Territory was created to extend from Kansas north to the 49th parallel, the US–Canada border. A large portion of Nebraska Territory would soon be split off into Dakota Territory (1861), and smaller portions transferred to Colorado Territory (1861) and Idaho Territory (1863) before the balance of the land became the State of Nebraska in 1867. Furthermore, any decisions on slavery in the new lands were to be made "when admitted as a state or states, the said territory, or any portion of the same, shall be received into the Union, with or without slavery, as their constitution may prescribe at the time of their admission." In a report accompanying the bill, Douglas's committee wrote that the Utah and New Mexico Acts: ... were intended to have a far more comprehensive and enduring effect than the mere adjustment of the difficulties arising out of the recent acquisition of Mexican territory. They were designed to establish certain great principles, which would not only furnish adequate remedies for existing evils, but, in all times to come, avoid the perils of a similar agitation, by withdrawing the question of slavery from the halls of Congress and the political arena, and committing it to the arbitrament of those who were immediately interested in, and alone responsible for its consequences. The report compared the situation in New Mexico and Utah with the situation in Nebraska. 
In the first instance, many had argued that slavery had previously been prohibited under Mexican law, just as it was prohibited in Nebraska under the 1820 Missouri Compromise. Just as the creation of New Mexico and Utah territories had not ruled on the validity of Mexican law on the acquired territory, the Nebraska bill was neither "affirming nor repealing ... the Missouri act". In other words, popular sovereignty was being established by ignoring, rather than addressing, the problem presented by the Missouri Compromise. Douglas's attempt to finesse his way around the Missouri Compromise did not work. Kentucky Whig Archibald Dixon believed that unless the Missouri Compromise was explicitly repealed, slaveholders would be reluctant to move to the new territory until slavery was approved by the settlers, who would most likely oppose slavery. On January 16, Dixon surprised Douglas by introducing an amendment that would repeal the section of the Missouri Compromise that prohibited slavery north of the 36°30' parallel. Douglas met privately with Dixon and in the end, despite his misgivings about Northern reaction, agreed to accept Dixon's arguments. A similar amendment was offered in the House by Philip Phillips of Alabama. With the encouragement of the "F Street Mess", Douglas met with them and Phillips to ensure that the momentum for passing the bill remained with the Democratic Party. They arranged to meet with President Franklin Pierce to ensure that the issue would be declared a test of party loyalty within the Democratic Party. Pierce was not enthusiastic about the implications of repealing the Missouri Compromise and had barely referred to Nebraska in his State of the Union message delivered December 5, 1853, just a month before. Close advisors Senator Lewis Cass of Michigan, a proponent of popular sovereignty as far back as 1848 as an alternative to the Wilmot Proviso, and Secretary of State William L. 
Marcy both told Pierce that repeal would create serious political problems. The full cabinet met, and only Secretary of War Jefferson Davis and Secretary of the Navy James C. Dobbin supported repeal. Instead, the president and cabinet submitted to Douglas an alternative plan that would have sought out a judicial ruling on the constitutionality of the Missouri Compromise. Both Pierce and Attorney General Caleb Cushing believed that the Supreme Court would find it unconstitutional. Douglas's committee met later that night. Douglas was agreeable to the proposal, but the Atchison group was not. Determined to offer the repeal to Congress on January 23 but reluctant to act without Pierce's commitment, Douglas arranged through Davis to meet with Pierce on January 22, even though it was a Sunday, when Pierce generally refrained from conducting any business. Douglas was accompanied at the meeting by Atchison, Hunter, Phillips, and John C. Breckinridge of Kentucky. Douglas and Atchison first met alone with Pierce before the whole group convened. Pierce was persuaded to support repeal, and at Douglas's insistence, Pierce provided a written draft, asserting that the Missouri Compromise had been made inoperative by the principles of the Compromise of 1850. Pierce later informed his cabinet, which concurred with the change of direction. The Washington Union, the communications organ for the administration, wrote on January 24 that support for the bill would be "a test of Democratic orthodoxy". On January 23, a revised bill was introduced in the Senate that repealed the Missouri Compromise and split the unorganized land into two new territories: Kansas and Nebraska. The division was the result of concerns expressed by settlers already in Nebraska as well as the senators from Iowa, who were concerned with the location of the territory's seat of government if such a large territory were created. 
In addition, the bill's proposed border for Kansas was perceived by Southern legislators as the best method to ensure at least one slave state would be created by the law (Kansas was made to border only Missouri, a slave state, and was therefore more likely to attract pro-slavery settlers). Existing language to affirm the application of all other laws of the United States in the new territory was supplemented by the language agreed on with Pierce: "except the eighth section of the act preparatory to the admission of Missouri into the Union, approved March 6, 1820 [the Missouri Compromise], which was superseded by the legislation of 1850, commonly called the compromise measures [the Compromise of 1850], and is declared inoperative." Identical legislation was soon introduced in the House. Historian Allan Nevins wrote that the country then became convulsed with two interconnected battles over slavery. A political battle was being fought in Congress over the question of slavery in the new states that were coming. At the same time, there was a moral debate. Southerners claimed that slavery was beneficent, endorsed by the Bible, and generally good policy, whose expansion must be supported. The publications and speeches of abolitionists, some of them former slaves themselves, were telling Northerners that the supposed beneficence of slavery was a Southern lie, and that enslaving another person was un-Christian, a horrible sin that must be fought. Both battles were "fought with a pertinacity, bitterness, and rancor unknown even in Wilmot Proviso days". The free-soilers were at a distinct disadvantage in Congress. The Democrats held large majorities in each house, and Douglas, "a ferocious fighter, the fiercest, most ruthless, and most unscrupulous that Congress had perhaps ever known", led a tightly disciplined party. In the nation at large, the opponents of Nebraska hoped to achieve a moral victory. 
The New York Times, which had earlier supported Pierce, predicted that this would be the last straw for Northern supporters of the slavery forces and would "create a deep-seated, intense, and ineradicable hatred of the institution which will crush its political power, at all hazards, and at any cost". The day after the bill was reintroduced, two Ohioans, Representative Joshua Giddings and Senator Salmon P. Chase, published a free-soil response, "Appeal of the Independent Democrats in Congress to the People of the United States": We arraign this bill as a gross violation of a sacred pledge; as a criminal betrayal of precious rights; as part and parcel of an atrocious plot to exclude from a vast unoccupied region emigrants from the Old World and free laborers from our States, and convert it into a dreary region of despotism, inhabited by masters and slaves. Douglas took the appeal personally and responded in Congress, when the debate was opened on January 30 before a full House and packed gallery. Douglas biographer Robert W. Johanssen described part of the speech: Douglas charged the authors of the "Appeal", whom he referred to throughout as the "Abolitionist confederates", with having perpetrated a "base falsehood" in their protest. He expressed his sense of betrayal, recalling that Chase, "with a smiling face and the appearance of friendship", had appealed for a postponement of debate on the ground that he had not yet familiarized himself with the bill. "Little did I suppose at the time that I granted that act of courtesy", Douglas remarked, that Chase and his compatriots had published a document "in which they arraigned me as having been guilty of a criminal betrayal of my trust", of bad faith, and of plotting against the cause of free government. While other Senators were attending divine worship, they had been "assembled in a secret conclave", devoting the Sabbath to their own conspiratorial and deceitful purposes. 
The debate would continue for four months, as many Anti-Nebraska political rallies were held across the north. Douglas remained the main advocate for the bill while Chase, William Seward, of New York, and Charles Sumner, of Massachusetts, led the opposition. The New-York Tribune wrote on March 2: The unanimous sentiment of the North is indignant resistance. ... The whole population is full of it. The feeling in 1848 was far inferior to this in strength and universality. The debate in the Senate concluded on March 4, 1854, when Douglas, beginning near midnight on March 3, made a five-and-a-half-hour speech. The final vote in favor of passage was 37 to 14. Free-state senators voted 14 to 12 in favor, and slave-state senators supported the bill 23 to 2. On March 21, 1854, as a delaying tactic in the House of Representatives, the legislation was referred by a vote of 110 to 95 to the Committee of the Whole, where it was the last item on the calendar. Realizing from the vote to stall that the act faced an uphill struggle, the Pierce administration made it clear to all Democrats that passage of the bill was essential to the party and would dictate how federal patronage would be handled. Davis and Cushing, from Massachusetts, along with Douglas, spearheaded the partisan efforts. By the end of April, Douglas believed that there were enough votes to pass the bill. The House leadership then began a series of roll call votes in which legislation ahead of the Kansas–Nebraska Act was called to the floor and tabled without debate. Thomas Hart Benton was among those speaking forcefully against the measure. On April 25, in a House speech that biographer William Nisbet Chambers called "long, passionate, historical, [and] polemical", Benton attacked the repeal of the Missouri Compromise, which he "had stood upon ... above thirty years, and intended to stand upon it to the end—solitary and alone, if need be; but preferring company". 
The speech was distributed afterward as a pamphlet when opposition to the action moved outside the walls of Congress. It was not until May 8 that the debate began in the House. The debate was even more intense than in the Senate. While it seemed to be a foregone conclusion that the bill would pass, the opponents went all out to fight it. Historian Michael Morrison wrote: A filibuster led by Lewis D. Campbell, an Ohio free-soiler, nearly provoked the House into a war of more than words. Campbell, joined by other antislavery northerners, exchanged insults and invectives with southerners, neither side giving quarter. Weapons were brandished on the floor of the House. Finally, bumptiousness gave way to violence. Henry A. Edmundson, a Virginia Democrat, well oiled and well-armed, had to be restrained from making a violent attack on Campbell. Only after the sergeant at arms arrested him, the debate was cut off, and the House adjourned did the melee subside. The floor debate was handled by Alexander Stephens, of Georgia, who insisted that the Missouri Compromise had never been a true compromise but had been imposed on the South. He argued that the issue was whether republican principles, "that the citizens of every distinct community or State should have the right to govern themselves in their domestic matters as they please", would be honored. The final House vote in favor of the bill was 113 to 100. Northern Democrats supported the bill 44 to 42, but all 45 northern Whigs opposed it. Southern Democrats voted in favor by 57 to 2, and Southern Whigs supported it by 12 to 7. President Franklin Pierce signed the Kansas–Nebraska Act into law on May 30, 1854. This act repealed the Missouri Compromise, "substituting for the ban on slavery what Douglas called 'popular sovereignty' ... arous[ing] a storm of protest throughout the North". Aftermath Immediate responses to the passage of the Kansas–Nebraska Act fell into two classes. 
The less common response was held by Douglas's supporters, who believed that the bill would withdraw "the question of slavery from the halls of Congress and the political arena, committing it to the arbitration of those who were immediately interested in, and alone responsible for, its consequences". In other words, they believed that the Act would leave decisions about whether slavery would be permitted in the hands of the people rather than the Federal government. The far more common response was one of outrage, interpreting Douglas's actions as, in their words, "part and parcel of an atrocious plot to exclude from a vast unoccupied region emigrants from the Old World, and free laborers from our States, and convert it into a dreary region of despotism, inhabited by masters and slaves". Especially in the eyes of northerners, the Kansas–Nebraska Act was an act of aggression and an attack on the power and beliefs of free states. The response led to calls for public action against the South, as seen in broadsides that advertised gatherings in northern states to discuss publicly what to do about the presumption of the Act. Douglas and former Illinois Representative Abraham Lincoln aired their disagreement over the Kansas–Nebraska Act in seven public speeches during September and October 1854. Lincoln gave his most comprehensive argument against slavery and the provisions of the act in Peoria, Illinois, on October 16, in the Peoria Speech. He and Douglas both spoke to the large audience, Douglas first and Lincoln in response, two hours later. Lincoln's three-hour speech presented thorough moral, legal, and economic arguments against slavery and raised Lincoln's political profile for the first time. The speeches set the stage for the Lincoln-Douglas debates four years later, when Lincoln sought Douglas's Senate seat. 
Bleeding Kansas, Bloody Kansas, or the Border War was a series of violent political confrontations in the United States between 1854 and 1861 involving anti-slavery "Free-Staters" and pro-slavery "Border Ruffian", or "Southern", elements in Kansas. At the heart of the conflict was the question of whether Kansas would allow or outlaw slavery, and thus enter the Union as a slave state or a free state. Pro-slavery settlers came to Kansas mainly from neighboring Missouri, successfully stacking votes to form a temporary pro-slavery government prior to statehood. Their influence in territorial elections was often bolstered by resident Missourians who crossed into Kansas solely to vote in such ballots. They formed groups such as the Blue Lodges and were dubbed border ruffians, a term coined by their opponent, the abolitionist Horace Greeley. Abolitionist settlers, known as "jayhawkers", moved from the East expressly to make Kansas a free state. A clash between the opposing sides was inevitable. Successive territorial governors, usually sympathetic to slavery, attempted to maintain the peace. The territorial capital of Lecompton, the target of much agitation, became such a hostile environment for Free-Staters that they set up their own, unofficial legislature at Topeka. Abolitionist John Brown and his sons gained notoriety in the fight against slavery by murdering five pro-slavery farmers with a broadsword in the 1856 Pottawatomie massacre. Brown also helped defend a few dozen Free-State supporters from several hundred angry pro-slavery supporters at Osawatomie. Before the organization of the Kansas–Nebraska territory in 1854, the Kansas and Nebraska Territories were consolidated as part of the Indian Territory. 
Throughout the 1830s, the United States engaged in large-scale relocations of American Indian tribes to the Indian Territory, with many Southeastern nations removed to present-day Oklahoma, a process ordered by the Indian Removal Act of 1830 and also known as the Trail of Tears, and many Midwestern nations removed by way of treaty to present-day Kansas. Among the latter were the Shawnee, Delaware, Kickapoo, Kaskaskia and Peoria, Ioway, and Miami. The passage of the Kansas–Nebraska Act brought it into direct conflict with these relocations. White American settlers from both the free-soil North and pro-slavery South flooded the Northern Indian Territory, hoping to influence the vote on slavery that would come following the admission of Kansas and, to a lesser extent, Nebraska to the United States. To address the reservation-settlement problem, US officials attempted further treaty negotiations with the tribes of Kansas and Nebraska. In 1854 alone, the U.S. agreed to acquire lands in Kansas or Nebraska from several tribes including the Kickapoo, Delaware, Omaha, Shawnee, Otoe and Missouri, Miami, and Kaskaskia and Peoria. In exchange for their land cessions, the tribes largely received small reservations in the Indian Territory of Oklahoma or, in some cases, Kansas. For the nations that remained in Kansas beyond 1854, the Kansas–Nebraska Act introduced a host of other problems. In 1855, white "squatters" built the city of Leavenworth on the Delaware reservation without the consent of either the Delaware or the US government. When Commissioner of Indian Affairs George Manypenny ordered military support in removing the squatters, both the military and the squatters refused to comply, undermining both Federal authority and the treaties in place with the Delaware. Construction and infrastructure improvement projects promised in nearly every treaty, for example, took a great deal longer to complete than expected. 
Perhaps the most damaging violation by white American settlers was the mistreatment of American Indians and their properties. Numerous episodes of personal maltreatment, destroyed property, and deforestation at the hands of squatters were reported. Furthermore, the squatters' premature and illegal settlement of the Kansas Territory jeopardized the value of the land, and with it the future of the Indian tribes living on it. Because treaties were land cessions and purchases, the value of the land handed over to the Federal government was critical to the payment received by a given Indian nation. Deforestation, destruction of property, and other general injuries to the land lowered the value of the territories that were ceded by the Kansas Territory Indian tribes. Manypenny's 1856 "Report on Indian Affairs" further explained the devastating effect on Indian populations of diseases that white settlers brought to Kansas. Without providing statistics, the area's Superintendent of Indian Affairs, Colonel Alfred Cumming, reported more deaths than births in most tribes in the territory. While noting intemperance, or alcoholism, as a leading cause of death, Cumming specifically cited cholera, smallpox, and measles, none of which the American Indians were able to treat effectively. The disastrous effects of such disease were exemplified by deaths among the Osage people. The Osage had already encountered epidemics associated with relocation and white settlement prior to 1854. The initial removal acts in the 1830s had brought both white American settlers and foreign American Indian tribes to the Great Plains and into contact with the Osage people. Between 1829 and 1843, influenza, cholera, and smallpox killed an estimated 1,242 Osage Indians, resulting in a population decline of roughly 20 percent between 1830 and 1850. 
Between 1852 and 1856, an estimated 1,300 Osage people lost their lives to scurvy, measles, smallpox, and scrofula, contributing, in part, to a massive decline in population, from 8,000 in 1850 to just 3,500 in 1860. From a political standpoint, the Whig Party had been in decline in the South because of the effectiveness with which it had been hammered by the Democratic Party over slavery. The Southern Whigs hoped that by seizing the initiative on this issue, they would be identified as strong defenders of slavery. Many Northern Whigs broke with them over the Act. The American party system had been dominated by Whigs and Democrats for decades leading up to the Civil War. But the Whig party's increasing internal divisions had made it a party of strange bedfellows by the 1850s. An ascendant anti-slavery wing clashed with a traditionalist and increasingly pro-slavery Southern wing. These divisions came to a head in the 1852 election, in which Whig candidate Winfield Scott was trounced by Franklin Pierce. Southern Whigs felt burned by prior Whig president Zachary Taylor, and were unwilling to support another Whig (Taylor, despite being a slave owner himself, had proved willing to stand against slaveholder interests during his presidency). The erosion of Southern Whig support alongside the loss of votes in the North to the Free Soil Party doomed the Whigs, who would never again contest a presidential election. The Kansas–Nebraska Act was also the spark that gave rise to the Republican Party, which would bring together former Whigs, Free Soilers, and anti-slavery Democrats to oppose slavery in a manner that the Whig Party never truly could. Since the act was viewed by anti-slavery Northerners as an aggressive, expansionist maneuver by the slave-owning South, opponents became intensely motivated to form a new party capable of checking this expansion. From its inception, the party was a coalition of the anti-slavery forces within American politics at that time. 
The first anti-Nebraska local meeting where "Republican" was suggested as a name for a new anti-slavery party was held in a Ripon, Wisconsin schoolhouse on March 20, 1854. The first statewide convention that formed a platform and nominated candidates under the Republican name was held near Jackson, Michigan, on July 6, 1854. At that convention, the party opposed the expansion of slavery into new territories and selected a statewide slate of candidates. The Midwest took the lead in forming state Republican Party tickets; apart from St. Louis and a few areas adjacent to free states, there were no efforts to organize the Party in the Southern states. So was born the Republican Party, campaigning on the popular, emotional issue of "free soil" in the frontier, which would capture the White House just six years later. The Kansas–Nebraska Act divided the nation and pointed it toward civil war. House Democrats suffered huge losses in the midterm elections of 1854, as voters provided support to a wide array of new parties that opposed the Democrats and the Kansas–Nebraska Act. Pierce deplored the new Republican Party because of its perceived anti-Southern, anti-slavery stance. To Northerners, the President's perceived Southern bias did nothing to de-escalate the public mood and helped inflame abolitionist anger. Partly due to the unpopularity of the Kansas–Nebraska Act, Pierce lost his bid for re-nomination at the 1856 Democratic National Convention to James Buchanan. Pierce was the first elected president who actively sought reelection but was denied his party's nomination for a second term. Republicans nominated John C. Frémont in the 1856 presidential election and campaigned on "Bleeding Kansas" and the unpopularity of the Kansas–Nebraska Act. Buchanan won the election, but Frémont carried a majority of the free states.
Two days after Buchanan's inauguration, Chief Justice Roger Taney delivered the Dred Scott decision, which asserted that Congress had no constitutional power to exclude slavery in the territories. Douglas continued to support the doctrine of popular sovereignty, but Buchanan insisted that Democrats respect the Dred Scott decision and its repudiation of federal interference with slavery in the territories. Guerrilla warfare in Kansas continued throughout Buchanan's presidency and into the 1860s, persisting until the American Civil War ended in 1865, with many unjust killings and lootings by partisans on either side of the border. Buchanan attempted to admit Kansas as a state under the pro-slavery Lecompton Constitution, but Kansas voters rejected that constitution in an August 1858 referendum. Anti-slavery delegates won a majority of the elections to the 1859 Kansas constitutional convention, and Kansas won admission as a free state under the anti-slavery Wyandotte Constitution in the final months of Buchanan's presidency. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ness_Ziona] | [TOKENS: 2300]
Contents Ness Ziona Ness Ziona (Hebrew: נֵס צִיּוֹנָה, Nes Tziyona) is a city in Central District, Israel. In 2023 it had a population of 47,534, and its jurisdiction was 15,579 dunams (15.579 km2 [6.015 sq mi]). Identification Lying within Ness Ziona's city bounds is the ruin of the Arab village of Sarafand al-Kharab, which was depopulated in 1948. Some scholars believe that this is the site that the medieval Jewish traveller Ishtori Haparchi identified as the Talmudic Tzrifin, but other scholars believe Haparchi was referring to Sarafand al-Amar, 5 km distant. However, neither site has revealed archaeological remains from Talmudic times. On the basis of excavations at Sarafand al-Kharab, it is believed to have been founded no earlier than the late Byzantine period. History In 1878, the German Templer Gustav Reisler purchased lands in Wadi Hunayn, planted an orchard, and lived there with his family. The name "Wadi-Chanin", with its German orthography, became the standard Western name for the place for several decades to come. After losing his wife and children to malaria, Reisler returned to Europe. He travelled to Odessa in 1882 and met Reuben Lehrer, born Patchornik (1832–1917), a religiously observant Russian Jew with Zionist ideals, who had his own farmland there. Reisler traded his parcel of land in Palestine for Lehrer's land in Russia. Reuben Lehrer made aliyah (emigrated to Palestine) with his eldest son Moshe in 1883, bringing over his wife and another four of his children the following year. Lehrer placed advertisements near the Jaffa port asking others to join him, offering plots of his land for a small sum of money. The pioneers who arrived established a settlement named Tel Aviv (the city of Tel Aviv did not yet exist), although the area was still known as Wadi Chanin, from its Arabic name, Wadi Hunayn. The settlement (colony, moshava) was known for a while as Wadi Chanin after the local Arab village,[dubious – discuss] and as Nahalat Reuben (lit.
"Reuben's Estate") after Reuben Lehrer. In 1891, Michael Halperin bought more land in the wadi. He gathered a group of people on the "Hill of Love"[clarification needed], where he arrived with the "Mahane Yehuda" mounted guards company he had founded, and unfurled a blue and white flag emblazoned with the Star of David and the words "Ness Ziona" ('Banner toward Zion' or 'Miracle of Zion') written in gold. The name is based on a verse from the Book of Jeremiah, Jeremiah 4:6: "Raise a standard toward Zion...". This flag was taken by Halperin to the First Zionist Congress six years later, in 1897, where it became the model for the official flag adopted by the nascent movement. In 1905, the "Geula" organisation bought the piece of land separating the older Wadi Chanin/Nahalat Reuben and the newer Ness Ziona, allowing the two Jewish settlements to unite into one larger village. In 1926, a new Arab village, Wadi Hunayn, developed across the Jaffa–Jerusalem road from a watermelon farm established there by the Abu Jaber clan from Sarafand el-Kharab, and became part of the same administrative unit as Ness Ziona. Until the 1948 Arab–Israeli War, it was the only mixed Arab–Jewish village in Mandatory Palestine. The coexistence was, on the whole, a peaceful one. According to a census conducted in 1922 by the British Mandate authorities, Ness Ziona had a population of 319 Jews. By the 1931 census, it had increased to 1,013 inhabitants in 221 houses. In 1921 a pump and a system of water pipes were installed. In 1924 the British Army contracted the Israel Electric Company[dubious – discuss] for wired electric power. The contract allowed the Electric Company to extend the grid beyond the original geographical limits that had been projected by the concession it was given.
The high-tension line that exceeded the limits of the original concession ran along some major towns and agricultural settlements, offering extended connections to the Jewish settlements of Rishon Le-Zion, Nes-Ziona and Rehovot (in spite of their proximity to the high-tension line, the Arab towns of Ramleh and Lydda remained unconnected). The Great Synagogue of Ness Ziona was built in the 1920s, during the period of the Third Aliyah. In 1935, a temporary workers' camp named Givat Michael, after Michael Halperin, was established near Ness Ziona. It was meant as a training camp for new settlement groups ("gar'in"), two of which went on to establish the kibbutzim of Gal On and Mesilot. Ness Ziona was attacked by Arab forces during the 1936–39 Arab Revolt and the 1948 Arab–Israeli War. The outlying villages of Kfar Aharon and Tirat Shalom (now part of Ness Ziona) frequently exchanged fire with the Arab villages al-Qubayba and Zarnuqa (now western Rehovot). Most of Ness Ziona's youth joined the Haganah to fight off these threats. On May 15, 1948, Sarafand al-Kharab was evacuated of Arab inhabitants, and on May 19, al-Qubayba and Zarnuqa were conquered by the Givati Brigade. Much of the territory abandoned by the fleeing Arab residents of nearby villages was added to Ness Ziona, increasing its size from 8 to 15.3 square kilometres (3.1 to 5.9 sq mi) immediately after the war. During the war, Ness Ziona's population almost tripled to 4,446 (according to an October 23, 1949 survey), and until 1950 the local council absorbed 9,000 olim, most of whom were housed in ma'abarot (provisional housing camps). In 1952, a new industrial zone was approved for the town on an area of 70 dunams. In 1955, a second industrial zone was approved. Geography Ness Ziona is located on the Israeli coastal plain approximately 10 km (6 mi) inland of the Mediterranean Sea, to the south of Tel Aviv.
The city is bordered to the north by Rishon LeZion, to the east by Be'er Ya'akov, and to the south by Rehovot. Beit Hanan, Beit Oved, Ayanot youth village and Kibbutz Netzer Sereni also border the city. The city has been designed to have a rural character due to urban planning that bans the construction of buildings higher than eight stories. Property values have risen by 30 percent in recent years. Ness Ziona is located in the Gush Dan metropolitan area. Ness Ziona is composed of a central core and villages that came under its municipal jurisdiction over time. The city also has two industrial zones and a high-tech park, Kiryat Weizmann. Demographics According to the Israeli Central Bureau of Statistics (CBS), in 2005 the ethnic makeup of the city was 99.6% Jewish and other non-Arabs. At the end of 2004 there were 612 immigrants (2.2%), although this rose sharply to 7.8% in 2005. The city also receives significant internal migration, and is popular among Tel Aviv residents seeking to leave the city. In 2005 there were 14,400 males and 14,900 females. 31.8% of the population was 19 years of age or younger, 15.2% between 20 and 29, 21% between 30 and 44, 19.1% from 45 to 59, 3.1% from 60 to 64, and 9.7% 65 years of age or older. The population growth rate in 2006 was 5.8%. In 2005, there were 11,830 salaried workers and 984 self-employed. The mean monthly wage for a salaried worker was NIS 7,597, a 9.2% increase over 2000. Salaried males had a mean monthly wage of NIS 9,802 (an 8.4% increase) versus NIS 5,595 for females (a 14% increase). The mean income for the self-employed was NIS 7,064. There were 290 people receiving unemployment benefits and 986 receiving an income guarantee (welfare). Economy Ness Ziona is home to the Israel Institute for Biological Research (IIBR), a secret government defence research institute working in chemical and biological research with 350 employees, and Zenith Solar, a solar energy company.
The Kiryat Weizmann Science Park is a magnet for many Israeli start-ups, among them Indigo Digital Press, which was acquired by Hewlett-Packard in 2002 and manufactures high-end digital printing presses. Education Until 1961 there was only an elementary school in Ness Ziona. In 1961 (the 5722 school year), Ben Gurion High School was opened. There are 20 schools in Ness Ziona. The following youth organizations have chapters in Ness Ziona: Sports The city has been represented in the top division of Israeli football by two different clubs; Maccabi Ness Ziona competed in the top flight in the first post-independence season. However, they lost all 24 games, and were relegated. A new club, Sektzia Ness Ziona, was formed in 1956 and reached the top flight in 1966. However, they were relegated after only one season. After folding, they reformed as Ironi Ness Ziona in 2001, and since then have reverted to their former name and reached Liga Leumit, the second tier. The club plays at the Ness Ziona Stadium. The town is also home to a basketball team, Ironi Nes Ziona B.C., playing in the national premier league. Transportation Ness Ziona has two main roads – Highway 42 to the west, and Road 412 (Weizmann Street), which goes through the city center and connects to Rishon LeZion and Rehovot. Ness Ziona is also served by 5 bus lines operated by Egged. Notable people Twin towns – sister cities Ness Ziona is twinned with: See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hack_and_slash] | [TOKENS: 980]
Contents Hack and slash Hack and slash, also known as hack and slay (H&S or HnS) or slash 'em up, refers to a type of gameplay that emphasizes combat with melee-based weapons (such as swords or blades). Such games may also feature projectile-based weapons (such as guns) as secondary weapons. It is a sub-genre of beat 'em up games, which focuses on melee combat, but developed into a genre of its own. The genre then developed its own branches such as character action games with Devil May Cry. The term "hack and slash" was originally used to describe a play style in tabletop role-playing games, carrying over from there to MUDs, massively multiplayer online role-playing games, and role-playing video games. In arcade and console style action video games, the term has an entirely different usage, specifically referring to action games with a focus on real-time combat with hand-to-hand weapons as opposed to guns or fists. The two types of hack-and-slash games are largely unrelated, though action role-playing games may combine elements of both. Types of hack-and-slash games In the context of action video games, the terms "hack and slash" or "slash 'em up" refer to melee weapon-based action games that are a sub-genre of beat 'em ups. Traditional 2D side-scrolling examples include Taito's The Legend of Kage (1985) and Rastan (1987), Sega's arcade video game series Shinobi (1987 debut) and Golden Axe (1989 debut), Data East's arcade game Captain Silver (1987), Tecmo's early Ninja Gaiden (Shadow Warriors) 2D games (1988 debut), Capcom's Strider (1989), the Master System game Danan: The Jungle Fighter (1990), Taito's Saint Sword (1991), Vivid Image's home computer game First Samurai (1991), and Vanillaware's Dragon's Crown (2013). The term "hack-and-slash" in reference to action-adventure games dates back to 1987, when Computer Entertainer reviewed The Legend of Zelda and said it had "more to offer than the typical hack-and-slash" epics.
In the early 21st century, journalists covering the video game industry often use the term "hack and slash" to refer to a distinct genre of 3D, third-person, weapon-based, melee action games. Examples include Capcom's Devil May Cry, Onimusha, and Sengoku Basara franchises, Koei Tecmo's Dynasty Warriors and 3D Ninja Gaiden games, Sony's Genji: Dawn of the Samurai and God of War, as well as Bayonetta, Darksiders, Dante's Inferno, Kingdom Hearts, and No More Heroes. The sub-genre that modernized the hack and slash is sometimes known as a "character action" game, and represents a modern evolution of traditional arcade-action, hack and slash games. This subgenre of games was largely initiated and defined by Hideki Kamiya, creator of the first Devil May Cry, Okami, and Bayonetta. In turn, Devil May Cry (2001) was influenced by earlier hack-and-slash games, including Onimusha: Warlords (2001) and Strider. Franchises from other genres have also had individual games that departed from their established identities and embraced the hack and slash genre, such as the Final Fantasy series with Final Fantasy XVI and Final Fantasy Crystal Chronicles. Hack-and-slash games have adopted concepts from beat 'em up and fighting games. These concepts include knockdowns, meter management, canceling, I-frames, hit/block stun, super armor, punishing, and zoning/spacing. The term "hack and slash" itself has roots in "pen and paper" role-playing games such as Dungeons & Dragons (D&D), denoting campaigns of violence with no other plot elements or significant goal. The term itself dates at least as far back as 1980, as shown in a Dragon article by Jean Wells and Kim Mohan which includes the following statement: "There is great potential for more than hacking and slashing in D&D or AD&D; there is the possibility of intrigue, mystery and romance involving both sexes, to the benefit of all characters in a campaign."
Hack and slash made the transition from the tabletop to role-playing video games, usually starting in D&D-like worlds. This form of gameplay influenced a wide range of action role-playing games, including games such as Xanadu and Diablo. Meanwhile, Soulslike RPGs were originally referred to as hack-and-slash RPGs, especially after the 2015 release of Bloodborne, with its similarly stylish combat mechanics. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fugitive_Slave_Act_of_1850] | [TOKENS: 2864]
Contents Fugitive Slave Act of 1850 The Fugitive Slave Act or Fugitive Slave Law was a statute passed by the 31st United States Congress on September 18, 1850, as part of the Compromise of 1850 between Southern interests in slavery and Northern Free-Soilers. The Act was one of the most controversial elements of the 1850 compromise and heightened Northern fears of a slave power conspiracy. It required that all escaped slaves, upon capture, be returned to the slave-owner and that officials and citizens of free states cooperate. The Act contributed to the growing polarization of the country over the issue of slavery. It was one of the factors that led to the founding of the Republican Party and the start of the American Civil War. Background By 1843, several hundred enslaved people per year escaped to the North successfully, making slavery an unstable institution in the border states.[page needed] The earlier Fugitive Slave Act of 1793 was a federal law that was written with the intent to enforce Article 4, Section 2, Clause 3 of the United States Constitution, which required the return of escaped slaves. It sought to force the authorities in free states to return fugitive slaves to their enslavers. Many free states wanted to disregard the Fugitive Slave Act. Some jurisdictions passed personal liberty laws, mandating a jury trial before alleged fugitive slaves could be moved, while others forbade the use of local jails or the assistance of state officials in arresting or returning fugitive slaves. In some cases, juries refused to convict individuals who had been indicted under the federal law. The Missouri Supreme Court routinely held that enslaved people who had been relocated into neighboring free states along with their enslavers gained their freedom as a result. The 1793 act dealt with slaves who escaped to free states without their enslavers' consent. The Supreme Court of the United States ruled in Prigg v.
Pennsylvania (1842) that states were not required to aid in the hunting or recapture of slaves, significantly weakening the law of 1793. After 1840, the Black population of Cass County, Michigan, proliferated as families were attracted by White defiance of discriminatory laws, by numerous highly supportive Quakers, and by low-priced land. Free and escaping Blacks found Cass County a haven. Their good fortune attracted the attention of Southern slavers. In 1847 and 1849, planters from Bourbon and Boone Counties, Kentucky, led raids into Cass County to recapture escaped slaves. The attacks failed, but the situation contributed to Southern demands in 1850 to pass a strengthened fugitive-slave act. Southern politicians often exaggerated the number of people escaping enslavement, blaming the escapes on Northern abolitionists, whom they believed to be inciting their allegedly happy slaves and interfering with Southern property rights. According to the Columbus Enquirer of 1850, the support from Northerners for fugitive slaves caused more ill will between the North and the South than did all the other causes combined. New law In response to the weakening of the original Fugitive Slave Act, Democratic senator James M. Mason of Virginia drafted the Fugitive Slave Act of 1850, which penalized officials who did not arrest fugitive slaves and made them liable to a fine of $1,000 (equivalent to $38,700 in 2025). Law-enforcement officials everywhere were required to arrest suspected escaped slaves on as little as a claimant's sworn testimony of ownership. Habeas corpus was declared irrelevant. The commissioners before whom the alleged fugitive slaves were brought for a hearing (no jury was permitted, and the alleged slaves could not testify) were compensated $10 (equivalent to $390 in 2025) if the subject was proven to be a fugitive and only $5 (equivalent to $190 in 2025) if he determined the proof to be insufficient.
In addition, any person aiding a fugitive by providing food or shelter was subject to as long as six months of imprisonment and a fine as high as $1,000. Officers who captured fugitive slaves were entitled to bonuses or promotions for their work. Enslavers needed only to supply an affidavit to a federal marshal to capture a fugitive from slavery. Since a suspected enslaved person was not eligible for a trial, the law resulted in the kidnapping and conscription of free Blacks into slavery, as purported fugitive slaves had no rights in court and could not defend themselves against accusations. James Oakes writes that "there was no statute of limitations. African Americans who had built families and worked as free people in the North for decades were vulnerable to re-enslavement at any time." The act adversely affected the prospects of escape from slavery, particularly in states close to the North. One study found that while slave prices rose across the South in the years after 1850, it appears that "the 1850 Fugitive Slave Act increased prices in border states by 15% to 30% more than in states further south", illustrating how the act altered the chance of successful escape. According to abolitionist John Brown, even in the supposedly safe refuge of Springfield, Massachusetts, "some of them are so alarmed that they tell me that they cannot sleep on account of either them or their wives and children. I can only say I think I have been enabled to do something to revive their broken spirits. I want all my family to imagine themselves in the same dreadful condition." In 1855, the Wisconsin Supreme Court became the only state high court to declare the Fugitive Slave Act unconstitutional as a result of a case involving fugitive slave Joshua Glover and Sherman Booth, who led efforts that thwarted Glover's recapture. In 1859 in Ableman v. Booth, the Supreme Court of the United States overruled the state court. 
Jury nullification occurred as Northern juries acquitted men accused of violating the law. Secretary of State Daniel Webster, who wanted high-profile convictions, was a key supporter of the law, as expressed in his famous "Seventh of March" speech. The jury nullifications ruined Webster's presidential aspirations and his last-ditch efforts to find a compromise between North and South. Webster led the prosecution against men accused of rescuing Shadrach Minkins in 1851 from Boston officials who intended to return Minkins to slavery; the juries convicted none of the men. Webster sought to enforce a law that was extremely unpopular in the North, and his Whig Party rejected him again when it chose a presidential nominee in 1852. In November 1850, the Vermont legislature passed the Habeas Corpus Law, requiring Vermont judicial and law enforcement officials to assist captured fugitive slaves. It also established a state judicial process, parallel to the federal process, for people accused of being fugitive slaves. This law rendered the federal Fugitive Slave Act effectively unenforceable in Vermont and caused a storm of controversy nationally. It was considered a nullification of federal law, a concept popular among slave states that wanted to nullify other aspects of federal law, and was part of highly charged debates over slavery. Noted poet and abolitionist John Greenleaf Whittier had called for such laws, and the Whittier controversy heightened pro-slavery reactions to the Vermont law. Virginia governor John B. Floyd warned that nullification could push the South toward secession. At the same time, President Millard Fillmore threatened to use the army to enforce the Fugitive Slave Act in Vermont. No test events took place in Vermont, but the rhetoric of the incident echoed South Carolina's 1832 nullification crisis and Thomas Jefferson's 1798 Kentucky Resolutions.
In February 1855, the Michigan legislature passed a law prohibiting county jails from being used to detain recaptured slaves, directing county prosecutors to defend recaptured slaves and entitling recaptured slaves to habeas corpus and trial by jury. Other states to pass personal liberty laws include Connecticut, Massachusetts, Maine, New Hampshire, Ohio, Pennsylvania and Wisconsin. The Fugitive Slave Law brought the issue home to anti-slavery citizens in the North, as it made them and their institutions responsible for enforcing slavery. "Where before many in the North had little or no opinions or feelings on slavery, this law seemed to demand their direct assent to the practice of human bondage, and it galvanized Northern sentiments against slavery." Moderate abolitionists were faced with the immediate choice of defying what they believed to be an unjust law or breaking with their consciences and beliefs. Harriet Beecher Stowe wrote Uncle Tom's Cabin (1852) in response to the law. Many abolitionists openly defied the law. Reverend Luther Lee, pastor of the Wesleyan Methodist Church of Syracuse, New York, wrote in 1855: I never would obey it. I had assisted thirty slaves to escape to Canada during the last month. If the authorities wanted anything of me, my residence was at 39 Onondaga Street. I would admit that and they could take me and lock me up in the Penitentiary on the hill; but if they did such a foolish thing as that I had friends enough in Onondaga County to level it to the ground before the next morning. Several years before, in the Jerry Rescue, Syracuse abolitionists freed by force a fugitive slave who was to be sent back to the South and successfully smuggled him to Canada. Thomas Sims and Anthony Burns were both captured fugitives who were part of unsuccessful attempts by opponents of the Fugitive Slave Law to use force to free them.
Other famous examples include Shadrach Minkins in 1851 and Lucy Bagby in 1861, whose forcible return has been cited by historians as important and "allegorical". Pittsburgh abolitionists organized groups whose purpose was the seizure and release of any enslaved person passing through the city, as in the case of a free Black servant of the Slaymaker family, who was erroneously the subject of a rescue by Black waiters in a hotel dining room. If fugitives from slavery were captured and put on trial, abolitionists worked to defend them at trial, and if the recaptured person's freedom was put up for a price, abolitionists worked to pay for their release. Other opponents, such as African-American leader Harriet Tubman, treated the law as just another complication in their activities. In April 1859, a putative freeman named Daniel Webster was arrested in Harrisburg, Pennsylvania, alleged to be Daniel Dangerfield, an escaped slave from Loudoun County, Virginia. At a hearing in Philadelphia, federal commissioner J. Cooke Longstreth ordered Webster's release, arguing the claimants had not proved that he was Dangerfield. Webster promptly left for Canada. One important consequence was that Canada, not the Northern free states, became the foremost destination for escaped slaves. The Black population of Canada increased from 40,000 to 60,000 between 1850 and 1860, and many reached freedom by the Underground Railroad. Notable Black publishers, such as Henry Bibb and Mary Ann Shadd, created publications encouraging emigration to Canada. By 1855, an estimated 3,500 people among Canada's Black population were fugitives from American slavery. In Pittsburgh, for example, during the September following the passage of the law, organized groups of escaped slaves, armed and sworn to "die rather than be taken back into slavery", set out for Canada, with more than 200 men leaving by the end of the month. The Black population in New York City dropped by almost 2,000 from 1850 to 1855.
On the other hand, many Northern businessmen supported the law due to their commercial ties with the Southern states. They founded the Union Safety Committee and raised thousands of dollars to promote their cause, which gained sway, particularly in New York City, and caused public opinion to shift somewhat towards supporting the law. End of the Act In the early stages of the American Civil War, the Union had no established policy on people escaping from slavery. Many enslaved people left their plantations heading for Union lines. Still, fugitives from slavery were often returned by Union forces to their enslavers. General Benjamin Butler and some other Union generals, however, refused to recapture fugitives under the law because the Union and the Confederacy were at war. Butler confiscated enslaved people as contraband of war and set them free, with the justification that the loss of labor would also damage the Confederacy. Lincoln allowed Butler to continue his policy but countermanded broader directives issued by other Union commanders that freed all enslaved people in places under their control. In August 1861, the U.S. Congress enacted the Confiscation Act of 1861, which barred enslavers from re-enslaving captured fugitives who were forced to aid or abet the insurrection. The legislation, sponsored by Lyman Trumbull, was passed on a near-unanimous vote and established military emancipation as official Union policy, but applied only to enslaved people used by rebel enslavers to support the Confederate cause, creating a limited exception to the Fugitive Slave Act. Union Army forces sometimes returned fugitives from slavery to enslavers until March 1862, when Congress enacted the Confiscation Act of 1862, Section 10 of which barred Union officers from returning slaves to their owners on pain of dismissal from the service.
James Mitchell Ashley proposed legislation to repeal the Fugitive Slave Act, but the bill did not make it out of committee in 1863. Although the Union policy of confiscation and military emancipation had effectively superseded its operation, the Fugitive Slave Act was not formally repealed until June 1864. The New York Tribune hailed the repeal, writing: "The blood-red stain that has blotted the statute-book of the Republic is wiped out forever."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_ref-41] | [TOKENS: 4993]
Orion (constellation)

Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations and was among the 48 constellations listed by the 2nd-century astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky.

Characteristics

Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes.
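The boundary figures quoted above define the extremes of a 26-sided polygon. As a rough illustration only, a sky position can be tested against the bounding box those extremes imply; this is a sketch, not the true IAU boundary, and the star coordinates in the comments are approximate J2000 values supplied for the example:

```python
def in_orion_bbox(ra_hours, dec_deg):
    """Rough test: does (RA, Dec) fall inside Orion's bounding box?

    Uses the boundary extremes quoted above (RA 4h 43.3m to 6h 25.5m,
    Dec -10.97 deg to +22.87 deg). The actual IAU boundary is a 26-sided
    polygon, so points near the edges may be misclassified.
    """
    return (4 + 43.3 / 60) <= ra_hours <= (6 + 25.5 / 60) and -10.97 <= dec_deg <= 22.87

# Betelgeuse (approx. RA 5h 55m, Dec +7.4 deg) falls inside the box;
# Sirius (approx. RA 6h 45m, Dec -16.7 deg) falls outside it.
print(in_orion_bbox(5 + 55 / 60, 7.4))    # True
print(in_orion_bbox(6 + 45 / 60, -16.7))  # False
```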
However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. The brightest stars, Orion's among them, are then visible at twilight for a few hours around local noon, in the brightest section of the sky, low in the north where the Sun sits just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky.

Navigational aid

Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), are also points in both the Winter Triangle and the Circle.

Features

Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star, Meissa, which appears fairly bright to the naked eye.
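The star-hopping described above (extending the Belt line toward Sirius, for instance) can be checked numerically: the angular separation between two sky positions follows from the spherical law of cosines. A minimal sketch, using approximate J2000 coordinates that are assumptions for the example rather than figures from the article:

```python
import math

def angular_separation(ra1_h, dec1_d, ra2_h, dec2_d):
    """Angular separation in degrees between two sky positions, via the
    spherical law of cosines (adequate for separations of a degree or
    more). Right ascension in hours, declination in degrees."""
    ra1, ra2 = math.radians(ra1_h * 15), math.radians(ra2_h * 15)
    d1, d2 = math.radians(dec1_d), math.radians(dec2_d)
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(ra1 - ra2))
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Approximate positions: Alnitak (5h 40.8m, -1.94 deg), Sirius (6h 45.1m, -16.72 deg).
print(round(angular_separation(5 + 40.8 / 60, -1.94, 6 + 45.1 / 60, -16.72), 1))
```

The sign convention and hour-to-degree factor (1 h of RA = 15°) are the only subtleties; a library such as Astropy's `SkyCoord.separation` would be the robust choice in practice.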
Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or the Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and is 375,000 times more luminous than the Sun, much of that output likewise in the ultraviolet. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately at the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7, lying at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars form a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base.
Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time.
Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions.

History and mythology

The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is also used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), the same figure as in other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion.
This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on the upper right) together form the reflex bow or the lifted scythe. In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), with the stars "hanging" from the Belt forming "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms.
The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally "fool"): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name of the ninth month of the Hebrew calendar (i.e. November–December), which in turn may derive from the Hebrew root K-S-L, as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant".
In China, Orion was one of the 28 lunar mansions, Sieu (Xiù, 宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion, representing the sound of the word, was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) in Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter), which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for "The Three Wise Men"). The Ojibwa/Chippewa Native Americans call this constellation Mesabi, "Big Man". To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail.
Another Lakota myth tells that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. The chief's daughter offered to marry whoever could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned the arm and married the daughter, symbolizing harmony between the gods and humanity, brought about with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino groups referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria". In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. The film distribution company Orion Pictures used the constellation as its logo.
In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, Orion holds a club (Chi Orionis).
His right leg is represented by Theta Orionis and his left leg by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models.

Future

Orion is located on the celestial equator, but it will not always be so located, due to the precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, though a few of its stars will eventually explode as supernovae; Betelgeuse, for example, is predicted to explode sometime in the next million years.
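The visibility claim about Great Britain follows from simple horizon geometry: from a northern latitude φ, a star of declination δ never rises when δ < φ − 90°. A minimal sketch of that rule; the 54° N latitude for Great Britain and the −40° example declination are assumptions for illustration, not figures from the article:

```python
def ever_rises(dec_deg, lat_deg):
    """True if a star at declination dec_deg ever rises above the horizon
    as seen from northern latitude lat_deg (refraction ignored)."""
    return dec_deg > lat_deg - 90

# From ~54 deg N, anything south of declination -36 deg never rises, so a
# star carried south of that by precession stays permanently below the horizon.
print(ever_rises(-40, 54))     # False
print(ever_rises(-10.97, 54))  # True (Orion's current southern boundary)
```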
========================================