[SOURCE: https://en.wikipedia.org/wiki/Martin_Luther_King_Jr._Day] | [TOKENS: 2577] |
Martin Luther King Jr. Day Martin Luther King Jr. Day (officially Birthday of Martin Luther King Jr., and often referred to in shorthand as MLK Day) is a federal holiday in the United States observed on the third Monday of January each year. King was the chief spokesperson for nonviolent activism in the civil rights movement, which protested legalized racial discrimination in federal and state law and civil society. The movement led to several groundbreaking legislative reforms in the United States. Born in 1929, Martin Luther King Jr.'s actual birthday is January 15 (which in 1929 fell on a Tuesday). The earliest Monday for this holiday is January 15 and the latest is January 21. The Monday observance is similar to that of the federal holidays which fall under the Uniform Monday Holiday Act. The campaign for a federal holiday in King's honor began soon after his assassination in 1968. President Ronald Reagan signed the holiday into law in 1983, and it was first observed three years later on January 20, 1986. At first, some states resisted observing the holiday as such, giving it alternative names or combining it with other holidays. Official observance under every state's law, as well as federal law, was achieved in 2000. History The initial idea of Martin Luther King Jr. Day as a holiday was promoted by labor unions in contract negotiations. After King's death, Representative John Conyers (a Democrat from Michigan) and Senator Edward Brooke (a Republican from Massachusetts) introduced a bill in Congress in 1968 to make King's birthday a national holiday. The bill first came to a vote in the U.S. House of Representatives in 1979, when the House voted on an amendment that would have moved the holiday to the third Sunday in January rather than a Monday. The House voted 207–191 against the amendment, as the bill's original sponsors called the amendment "unacceptable". Two of the main arguments mentioned by opponents were that a paid holiday for federal employees would be too expensive and that a holiday to honor a private citizen would be contrary to longstanding tradition (King had never held public office). Only two other figures have national holidays in the U.S. honoring them: George Washington and Christopher Columbus. Soon after, the King Center turned to support from the corporate community and the general public. The success of this strategy was cemented when musician Stevie Wonder released the single "Happy Birthday" to popularize the campaign in 1980 and hosted the Rally for Peace Press Conference in 1981. Six million signatures were collected for a petition to Congress to pass the law, termed by a 2006 article in The Nation as "the largest petition in favor of an issue in U.S. history". Senators Jesse Helms and John P. East (both North Carolina Republicans) led the opposition to the holiday and questioned whether King was important enough to receive such an honor. Helms criticized King's opposition to the Vietnam War and accused him of espousing "action-oriented Marxism". Helms led a filibuster against the bill and on October 3, 1983, submitted a 300-page document to the Senate alleging that King had associations with communists. Democratic New York Senator Daniel Patrick Moynihan declared Helms' document a "packet of filth", threw it on the Senate floor, and stomped on it. 
President Ronald Reagan initially opposed the establishment of the holiday, stating in a letter to former New Hampshire governor Meldrim Thomson that he believed the momentum for establishing it to be "based on an image, not reality." When asked to comment on Helms' accusations that King was a communist, the president said "We'll know in thirty-five years, won't we", referring to the eventual release of FBI surveillance tapes that had previously been sealed. But on November 2, 1983, Reagan signed into law a bill, proposed by Representative Katie Hall of Indiana, creating a federal holiday honoring King. The final vote in the House of Representatives on August 2, 1983, was 338–90 (242–4 in the House Democratic Caucus and 89–77 in the House Republican Conference) with 5 members voting present or abstaining, while the final vote in the Senate on October 19, 1983, was 78–22 (41–4 in the Senate Democratic Caucus and 37–18 in the Senate Republican Conference), both veto-proof margins. The holiday was observed for the first time on January 20, 1986. It is observed on the third Monday of January. The bill also established the "Martin Luther King, Jr. Federal Holiday Commission" to oversee observance of the holiday, and Coretta Scott King, King's wife, was made a member of this commission for life by President George H. W. Bush in May 1989. Although the federal holiday honoring King was signed into law in 1983 and took effect three years later, not every U.S. state chose to observe the January holiday at the state level until 1991, when the New Hampshire legislature created "Civil Rights Day" and abolished its April "Fast Day". In 1999, New Hampshire became the last state to name a holiday after King, which it first celebrated in January 2000 – the first nationwide celebration of the day with this name. In 1986, Arizona Governor Bruce Babbitt, a Democrat, created a paid state MLK holiday in Arizona by executive order just before he left office, but in 1987, his Republican successor Evan Mecham, citing an attorney general's opinion that Babbitt's order was illegal, reversed Babbitt's decision days after taking office. Later that year, Mecham proclaimed the third Sunday in January to be "Martin Luther King Jr./Civil Rights Day" in Arizona, albeit as an unpaid holiday. This proposal was rejected by the state Senate the following year. In 1990, Arizona voters were given the opportunity to vote on giving state employees a paid MLK holiday. That same year, the National Football League threatened to move Super Bowl XXVII, which was planned for Arizona in 1993, if the MLK holiday was voted down. In the November 1990 election, the voters were offered two King Day options: Proposition 301, which replaced Columbus Day on the list of paid state holidays, and Proposition 302, which merged Lincoln's and Washington's birthdays into one paid holiday to make room for MLK Day. Both measures failed to pass, with only 49% of voters approving Prop 302, the more popular of the two options, although some who voted "no" on 302 voted "yes" on Prop 301. Consequently, the state lost the chance to host Super Bowl XXVII, which was subsequently held at the Rose Bowl in Pasadena, California. In a 1992 referendum, the voters, this time given only one option for a paid King Day, approved state-level recognition of the holiday. On May 2, 2000, South Carolina governor Jim Hodges signed a bill to make King's birthday an official state holiday. 
South Carolina was the last state to recognize the day as a paid holiday for all state employees. Before the bill, employees could choose between celebrating Martin Luther King Jr. Day or one of three Confederate holidays. Presidential tradition Many American presidents have come to commemorate this day at Ebenezer Baptist Church in Atlanta, where King served as assistant pastor for eight years. Alternative names While all states now observe the holiday, some did not name the day after King. For example, in New Hampshire, the holiday was known as "Civil Rights Day" until 1999, when the State Legislature voted to change the name of the holiday to Martin Luther King Day. Several additional states have chosen to combine commemorations of King's birthday with other observances. Observance Overall, as of 2019, 45% of employers gave employees the day off.[unreliable source?] The reasons for not providing the day off have varied, ranging from the recent addition of the holiday to its occurrence just two weeks after the week between Christmas and New Year's Day, when many businesses are closed for part or all of it. The New York Stock Exchange and NASDAQ both close for trading, and banks are generally closed. Additionally, many schools and places of higher education are closed for classes; others remain open but may hold seminars or celebrations of King's message. The observance of MLK Day has led some colleges and universities to extend their Christmas break to include the day as part of the break. Some employers use MLK Day as a floating or movable holiday. The national "Martin Luther King, Jr., National Day of Service" was started by former Pennsylvania U.S. Senator Harris Wofford and Atlanta Congressman John Lewis, who co-authored the King Holiday and Service Act. The federal legislation challenges Americans to transform the King Holiday into a day of citizen action volunteer service in honor of King. The federal legislation was signed into law by President Bill Clinton on August 23, 1994. Since 1996, Wofford's former state office director, Todd Bernstein, has been directing the annual Greater Philadelphia King Day of Service, the largest event in the nation honoring King. Since 1994, the day of service has been coordinated nationally by AmeriCorps, a federal agency, which provides grants to organizations that coordinate service activities on MLK Day. The only other official national day of service in the U.S., as designated by the government, is the September 11 National Day of Service (9/11 Day). Previously, entry to national parks was free on MLK Day and Juneteenth; however, under a December 2025 directive by the Trump Administration, this was ended, with free entry instead granted on Donald Trump's birthday, which coincides with Flag Day. Cesar Chavez campaigned with King to call attention to the economic needs of farmworkers in the United States. Chavez used his speech on this day in 1990 to again call attention to the similarity between his campaign regarding pesticide issues and King's campaigns. He was later honored with the creation of Cesar Chavez Day in imitation of this holiday. The day is not a holiday in Canada. It is commemorated annually by the City of Toronto and City of Ottawa governments in Ontario and by Montreal in Quebec. In 1984, during a visit by the U.S. Sixth Fleet, Navy chaplain Rabbi Arnold Resnicoff conducted the first Israeli presidential ceremony in commemoration of Martin Luther King Jr. Day, held in the President's Residence, Jerusalem. 
Aura Herzog, wife of Israel's then-President Chaim Herzog, noted that she was especially proud to host this special event, because Israel had a national forest in honor of King, and that Israel and King shared the idea of "dreams". Resnicoff continued this theme in his remarks during the ceremony, quoting the verse from Genesis, spoken by the brothers of Joseph when they saw their brother approach, "Behold the dreamer comes; let us slay him and throw him into the pit, and see what becomes of his dreams." Resnicoff noted that, from time immemorial, there have been those who thought they could kill the dream by slaying the dreamer, but – as the example of King's life shows – such people are always wrong. Martin Luther King Jr. Day is observed in the Japanese city of Hiroshima. In January 2005, Mayor Tadatoshi Akiba held a special banquet at the mayor's office as an act of unifying his city's call for peace with King's message of human rights. Every year since 1987, the Dr. Martin Luther King Tribute and Dinner has been held in Wassenaar, The Netherlands. The Tribute includes young people and veterans of the Civil Rights Movement as well as music. It always ends with everyone holding hands in a circle and singing "We Shall Overcome". The Tribute is held on the last Sunday in January. Dates 1986–2103 The holiday is observed on the third Monday in January; in some years it falls on the same day as the presidential inauguration. |
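The third-Monday rule described above is simple to compute. The sketch below is a minimal, illustrative Java example (the class and method names are mine, not from the article) that derives the observed date for a given year using the standard java.time API; for 1986 it yields January 20, the first federal observance.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.Month;
import java.time.temporal.TemporalAdjusters;

public class MlkDay {
    // Observed date of Martin Luther King Jr. Day: the third Monday of
    // January, which always falls between January 15 and January 21.
    static LocalDate observedDate(int year) {
        return LocalDate.of(year, Month.JANUARY, 1)
                .with(TemporalAdjusters.dayOfWeekInMonth(3, DayOfWeek.MONDAY));
    }

    public static void main(String[] args) {
        System.out.println(observedDate(1986)); // 1986-01-20, first federal observance
        System.out.println(observedDate(2000)); // 2000-01-17, first observance in all states
    }
}
```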
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Red_supergiant] | [TOKENS: 3376] |
Red supergiant Red supergiants (RSGs) are stars with a supergiant luminosity class (Yerkes class I) and a stellar classification K or M. They are the largest stars in the universe in terms of volume, although they are not the most massive or luminous. Betelgeuse and Antares A are the brightest and best known red supergiants (RSGs), indeed the only first magnitude red supergiant stars. Classification Stars are classified as supergiants on the basis of their spectral luminosity class. This system uses certain diagnostic spectral lines to estimate the surface gravity of a star, hence determining its size relative to its mass. Larger stars are more luminous at a given temperature and can therefore be grouped into bands of differing luminosity. The luminosity differences between stars are most apparent at low temperatures, where giant stars are much brighter than main-sequence stars. Supergiants have the lowest surface gravities and hence are the largest and brightest at a particular temperature. The Yerkes or Morgan-Keenan (MK) classification system is almost universal. It groups stars into five main luminosity classes designated by Roman numerals. Specific to supergiants, the luminosity class is further divided into normal supergiants of class Ib and brightest supergiants of class Ia. The intermediate class Iab is also used. Exceptionally bright, low surface gravity stars with strong indications of mass loss may be designated by luminosity class 0 (zero), although this is rarely seen. More often the designation Ia-0 will be used, and more commonly still Ia+. These hypergiant spectral classifications are very rarely applied to red supergiants, although the term red hypergiant is sometimes used for the most extended and unstable red supergiants like VY Canis Majoris and NML Cygni. The "red" part of "red supergiant" refers to the cool temperature. Red supergiants are the coolest supergiants: M-type stars and at least some K-type stars, although there is no precise cutoff. K-type supergiants are uncommon compared to M-type because they are a short-lived transition stage and somewhat unstable. The K-type stars, especially early or hotter K types, are sometimes described as orange supergiants (e.g. Zeta Cephei), or even as yellow (e.g. yellow hypergiant HR 5171 Aa).[citation needed] Properties Red supergiants are cool and large. They have spectral types of K and M, hence surface temperatures below 4,100 K. They are typically several hundred to over a thousand times the radius of the Sun, although size is not the primary factor in a star being designated as a supergiant. A bright cool giant star can easily be larger than a hotter supergiant. For example, Alpha Herculis is classified as a giant star with a radius of between 264 and 303 R☉ while Epsilon Pegasi is a K2 supergiant of only 185 R☉. Although red supergiants are much cooler than the Sun, they are so much larger that they are highly luminous, typically tens or hundreds of thousands of L☉. There is a theoretical upper limit to the radius of a red supergiant at around 1,500 R☉. At the Hayashi limit, stars above this radius would be too unstable and simply do not form. Red supergiants have masses between about 10 M☉ and 30 or 40 M☉. Main-sequence stars more massive than about 40 M☉ do not expand and cool to become red supergiants. Red supergiants at the upper end of the possible mass and luminosity range are the largest known. 
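The combination of low temperature and very large radius quoted above translates directly into high luminosity through the Stefan–Boltzmann law. As a sketch (the specific numbers below are illustrative, not taken from the article):

$$ L = 4\pi R^{2} \sigma T_{\mathrm{eff}}^{4} \quad\Longrightarrow\quad \frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4} $$

For an assumed star with $R = 900\,R_{\odot}$ and $T_{\mathrm{eff}} = 3{,}600\,\mathrm{K}$ (against $T_{\odot} \approx 5{,}772\,\mathrm{K}$), this gives $L \approx 900^{2} \times 0.15\,L_{\odot} \approx 1.2 \times 10^{5}\,L_{\odot}$, consistent with the tens to hundreds of thousands of solar luminosities described above.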
Their low surface gravities and high luminosities cause extreme mass loss, millions of times higher than the Sun, producing observable nebulae surrounding the star. By the end of their lives red supergiants may have lost a substantial fraction of their initial mass. The more massive supergiants lose mass much more rapidly and all red supergiants appear to reach a similar mass of the order of 10 M☉ by the time their cores collapse. The exact value depends on the initial chemical makeup of the star and its rotation rate. Most red supergiants show some degree of visual variability, but only rarely with a well-defined period or amplitude. Therefore, they are usually classified as irregular or semiregular variables. They even have their own sub-classes, SRC and LC for slow semi-regular and slow irregular supergiant variables respectively. Variations are typically slow and of small amplitude, but amplitudes up to four magnitudes are known. Statistical analysis of many known variable red supergiants shows a number of likely causes for variation: just a few stars show large amplitudes and strong noise indicating variability at many frequencies, thought to indicate powerful stellar winds that occur towards the end of the life of a red supergiant; more common are simultaneous radial mode variations over a few hundred days and probably non-radial mode variations over a few thousand days; only a few stars appear to be truly irregular, with small amplitudes, likely due to photospheric granulation. Red supergiant photospheres contain a relatively small number of very large convection cells compared to stars like the Sun. This causes variations in surface brightness that can lead to visible brightness variations as the star rotates. The spectra of red supergiants are similar to other cool stars, dominated by a forest of absorption lines of metals and molecular bands. Some of these features are used to determine the luminosity class, for example certain near-infrared cyanogen band strengths and the Ca II triplet. Maser emission is common from the circumstellar material around red supergiants. Most commonly this arises from H2O and SiO, but hydroxyl (OH) emission also occurs from narrow regions. In addition to high resolution mapping of the circumstellar material around red supergiants, VLBI or VLBA observations of masers can be used to derive accurate parallaxes and distances to their sources. Currently this has been applied mainly to individual objects, but it may become useful for analysis of galactic structure and discovery of otherwise obscured red supergiant stars. Surface abundances of red supergiants are dominated by hydrogen even though hydrogen at the core has been completely consumed. In the latest stages of mass loss, before a star explodes, surface helium may become enriched to levels comparable with hydrogen. In theoretical extreme mass loss models, sufficient hydrogen may be lost that helium becomes the most abundant element at the surface. When pre-red supergiant stars leave the main sequence, oxygen is more abundant than carbon at the surface, and nitrogen is less abundant than either, reflecting abundances from the formation of the star. Carbon and oxygen are quickly depleted and nitrogen enhanced as a result of the dredge-up of CNO-processed material from the fusion layers. Red supergiants are observed to rotate slowly or very slowly. Models indicate that even rapidly rotating main-sequence stars should be braked by their mass loss so that red supergiants hardly rotate at all. 
Those red supergiants, such as Betelgeuse, that do have modest rates of rotation may have acquired them after reaching the red supergiant stage, perhaps through binary interaction. The cores of red supergiants are still rotating and the differential rotation rate can be very large. Definition Supergiant luminosity classes are easy to determine and apply to large numbers of stars, but they group several very different types of stars into a single category. An evolutionary definition restricts the term supergiant to those massive stars which start core helium fusion without developing a degenerate helium core and without undergoing a helium flash. They will universally go on to burn heavier elements and undergo core-collapse resulting in a supernova. Less massive stars may develop a supergiant spectral luminosity class at relatively low luminosity, around 1,000 L☉, when they are on the asymptotic giant branch (AGB) undergoing helium shell burning. Researchers now prefer to categorize these as AGB stars distinct from supergiants because they are less massive, have different chemical compositions at the surface, undergo different types of pulsation and variability, and will evolve differently, usually producing a planetary nebula and white dwarf. Most AGB stars will not become supernovae although there is interest in a class of super-AGB stars, those almost massive enough to undergo full carbon fusion, which may produce peculiar supernovae although without ever developing an iron core. One notable group of low mass high luminosity stars are the RV Tauri variables, AGB or post-AGB stars lying on the instability strip and showing distinctive semi-regular variations. Evolution Red supergiants develop from main-sequence stars with masses between about 8 M☉ and 30 or 40 M☉. Higher-mass stars never cool sufficiently to become red supergiants. Lower-mass stars develop a degenerate helium core during a red giant phase, undergo a helium flash before fusing helium on the horizontal branch, evolve along the AGB while burning helium in a shell around a degenerate carbon-oxygen core, then rapidly lose their outer layers to become a white dwarf with a planetary nebula. AGB stars may develop spectra with a supergiant luminosity class as they expand to extreme dimensions relative to their small mass, and they may reach luminosities tens of thousands of times the Sun's. Intermediate "super-AGB" stars, around 7–9 M☉, can undergo carbon fusion and may produce an electron capture supernova through the collapse of an oxygen–neon core. Main-sequence stars, burning hydrogen in their cores, with masses between 10 and 30 or 40 M☉ will have temperatures between about 25,000 K and 32,000 K and spectral types of early B, possibly very late O. They are already very luminous stars of 10,000–100,000 L☉ due to rapid CNO cycle fusion of hydrogen and they have fully convective cores. In contrast to the Sun, the outer layers of these hot main-sequence stars are not convective. These pre-red supergiant main-sequence stars exhaust the hydrogen in their cores after 5–20 million years. They then start to burn a shell of hydrogen around the now-predominantly helium core, and this causes them to expand and cool into supergiants. Their luminosity increases by a factor of about three. The surface abundance of helium is now up to 40% but there is little enrichment of heavier elements. 
The supergiants continue to cool and most will rapidly pass through the Cepheid instability strip, although the most massive will spend a brief period as yellow hypergiants. They will reach late K or M class and become red supergiants. Helium fusion in the core begins smoothly either while the star is expanding or once it is already a red supergiant, but this produces little immediate change at the surface. Red supergiants develop deep convection zones reaching from the surface over halfway to the core and these cause strong enrichment of nitrogen at the surface, with some enrichment of heavier elements. Some red supergiants undergo blue loops where they temporarily increase in temperature before returning to the red supergiant state. This depends on the mass, rate of rotation, and chemical makeup of the star. While many red supergiants will not experience a blue loop, some can have several. Temperatures can reach 10,000 K at the peak of the blue loop. The exact reasons for blue loops vary in different stars, but they are always related to the helium core increasing as a proportion of the mass of the star and forcing higher mass-loss rates from the outer layers. All red supergiants will exhaust the helium in their cores within one or two million years and then start to burn carbon. This continues with fusion of heavier elements until an iron core builds up, which then inevitably collapses to produce a supernova. The time from the onset of carbon fusion until the core collapse is no more than a few thousand years. In most cases, core-collapse occurs while the star is still a red supergiant, the large remaining hydrogen-rich atmosphere is ejected, and this produces a Type II supernova spectrum. The opacity of this ejected hydrogen decreases as it cools and this causes an extended delay to the drop in brightness after the initial supernova peak, characteristic of a Type II-P supernova. The most luminous red supergiants, at near solar metallicity, are expected to lose most of their outer layers before their cores collapse, hence they evolve back to yellow hypergiants and luminous blue variables. Such stars can explode as Type II-L supernovae, still with hydrogen in their spectra but not with sufficient hydrogen to cause an extended brightness plateau in their light curves. Stars with even less hydrogen remaining may produce the uncommon Type IIb supernova, where there is so little hydrogen remaining that the hydrogen lines in the initial Type II spectrum fade to the appearance of a Type Ib supernova. The observed progenitors of Type II-P supernovae all have temperatures between 3,500 K and 4,400 K and luminosities between 10,000 L☉ and 300,000 L☉. This matches the expected parameters of lower mass red supergiants. A small number of progenitors of Type II-L and Type IIb supernovae have been observed, all having luminosities around 100,000 L☉ and somewhat higher temperatures up to 6,000 K. These are a good match for slightly higher mass red supergiants with high mass-loss rates. There are no known supernova progenitors corresponding to the most luminous red supergiants, and it is expected that these evolve to Wolf–Rayet stars before exploding. Clusters Red supergiants are necessarily no more than about 25 million years old and such massive stars are expected to form only in relatively large clusters of stars, so they are expected to be found mostly near prominent clusters. 
However, they are fairly short-lived compared to other phases in the life of a star and only form from relatively uncommon massive stars, so there will generally only be small numbers of red supergiants in each cluster at any one time. The massive Hodge 301 cluster in the Tarantula Nebula contains three. Until the 21st century the largest number of red supergiants known in a single cluster was five in NGC 7419. Most red supergiants are found singly, for example Betelgeuse in the Orion OB1 association and Antares in the Scorpius–Centaurus association. Since 2006, a series of massive clusters have been identified near the base of the Crux-Scutum Arm of the galaxy, each containing multiple red supergiants. RSGC1 contains at least 12 red supergiants, RSGC2 (also known as Stephenson 2) contains at least 26, RSGC3 contains at least 8, and RSGC4 (also known as Alicante 8) also contains at least 8. A total of 80 confirmed red supergiants have been identified within a small area of the sky in the direction of these clusters. These four clusters appear to be part of a massive burst of star formation 10–20 million years ago at the near end of the bar at the centre of the galaxy. Similar massive clusters have been found near the far end of the galactic bar, but without such large numbers of red supergiants. Examples Red supergiants are rare stars, but they are visible at great distance and are often variable, so there are a number of well-known naked-eye examples. Mira was historically thought to be a red supergiant star, but is now widely accepted to be an asymptotic giant branch star. Some red supergiants are larger and more luminous, with radii exceeding a thousand times that of the Sun; these are hence also referred to as red hypergiants. A survey expected to capture virtually all Magellanic Cloud red supergiants detected around a dozen M-class stars of absolute visual magnitude −7 and brighter, around a quarter of a million times more luminous than the Sun, and from about 1,000 times the radius of the Sun upwards. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Tatar_language] | [TOKENS: 4204] |
Tatar language Tatar (/ˈtɑːtər/ TAH-tər; Tatar: татар теле, romanized: tatar tele or татарча, romanized: tatarça) is a Turkic language spoken by the Tatars, mainly located in the modern Republic of Tatarstan and the wider Volga-Ural region, as well as in many other regions of Russia. Tatar belongs to the Kipchak branch of Turkic languages, the same branch as Bashkir, Kazakh, Nogai and Kyrgyz. The two main dialects of Tatar are the Central Dialect (urta / qazan; most common), and the Western Dialect (könbatış / mişər). The literary Tatar language is based on the Central Dialect and on a local variant of Türki. Tatar should not be confused with Crimean Tatar or Siberian Tatar, which are different languages, although also part of the Kipchak language group. Like other Turkic languages, Tatar was traditionally written in the Arabic script for most of its history. Since 1939, the alphabet has been Cyrillic, though a number of Latin-based versions have also been used over the years. Geographic distribution The Tatar language is spoken in Russia by about 5.3 million people, and also by communities in Azerbaijan, China, Finland, Georgia, Israel, Kazakhstan, Latvia, Lithuania, Romania, Turkey, Ukraine, the United States, Uzbekistan, and several other countries.[citation needed] Globally, there are more than 7 million speakers of Tatar. Tatar is also the mother tongue for several thousand Mari, a Finnic people;[citation needed] Mordva's Qaratay group also speak a variant of Kazan Tatar. In the 2010 census, 69% of Russian Tatars claimed at least some knowledge of the Tatar language. In Tatarstan, 93% of Tatars and 3.6% of Russians claimed to have at least some knowledge of the Tatar language. In neighbouring Bashkortostan, 67% of Tatars, 27% of Bashkirs, and 1.3% of Russians claimed to understand basic Tatar. Tatar, along with Russian, is the official language of the Republic of Tatarstan. The official script of the Tatar language is based on the Cyrillic script with some additional letters. The Republic of Tatarstan passed a law in 1999, which came into force in 2001, establishing an official Tatar Latin alphabet. A Russian federal law overrode it in 2002, making Cyrillic the sole official script in Tatarstan since then. Unofficially, other scripts are used as well, mostly Latin and Arabic. All official sources in Tatarstan must use Cyrillic on their websites and in publishing. In other cases, where Tatar has no official status, the use of a specific alphabet depends on the preference of the author. The Tatar language was made a de facto official language in Russia in 1917, but only within the Tatar Autonomous Soviet Socialist Republic. Tatar is also considered to have been the official language in the short-lived Idel-Ural State, briefly formed during the Russian Civil War. The usage of Tatar declined during the 20th century. By the 1980s, the study and teaching of Tatar in the public education system was limited to rural schools. However, Tatar-speaking pupils had little chance of entering university because higher education was available almost exclusively in Russian. As of 2001, Tatar was considered a potentially endangered language, while Siberian Tatar received "endangered" and "seriously endangered" statuses, respectively. Higher education in Tatar can only be found in Tatarstan, and is restricted to the humanities. In other regions Tatar is primarily a spoken language and the number of speakers, as well as their proficiency, tends to decrease. 
Tatar is popular as a written language only in Tatar-speaking areas where schools with Tatar language lessons are situated. On the other hand, Tatar is the only language in use in rural districts of Tatarstan. Since 2017, Tatar language classes are no longer mandatory in the schools of Tatarstan. According to the opponents of this change, it will further endanger the Tatar language and is a violation of the Tatarstan Constitution, which stipulates the equality of the Russian and Tatar languages in the republic. Dialects There are two main dialects of Tatar, the Central and the Western; these dialects also have subdivisions. Significant contributions to the study of the Tatar language and its dialects were made by the scientist Gabdulkhay Akhatov, who is considered to be the founder of the modern Tatar dialectological school. Spoken idioms of the Siberian Tatars, which differ significantly from the above two, are often considered a third dialect group of Tatar by some, but an independent language in its own right by others. The Central or Middle dialectal group is spoken in Kazan and most of Tatarstan and is the basis of the standard literary Tatar language. Middle Tatar includes the Nagaibak dialect. The Western (Mishar) dialect is distinguished from the Central dialect most clearly by the absence of the uvular q and ğ and the rounded å of the first syllable. Letters ç and c are pronounced as affricates. Regional differences also exist. The Mishar dialect, and especially its regional variant in Sergachsky district (Nizhny Novgorod), is said to be "faithfully close" to the ancient Kipchak language. Some linguists, such as Radlov and Samoylovich, think that Mishar traditionally belongs to the Kipchak-Cuman group of languages, rather than to the Kipchak-Bulgar group. Mishar is the dialect spoken by the Tatar minority of Finland. Two main isoglosses that characterize Siberian Tatar are ç as [ts] and c as [j], corresponding to standard [ɕ] and [ʑ]. There are also grammatical differences within the dialect, scattered across Siberia. Many linguists claim the origins of Siberian Tatar dialects are actually independent of Volga–Ural Tatar; these dialects are quite remote both from Standard Tatar and from each other, often preventing mutual comprehension. The claim that this language is part of the modern Tatar language is typically supported by linguists in Kazan and Moscow and by Siberian Tatar linguists, and denounced by some Russian and Tatar ethnographers. Over time, some of these dialects were given distinct names and recognized as separate languages (e.g. the Chulym language) after detailed linguistic study. However, the Chulym language was never classified as a dialect of the Tatar language. Confusion arose because of the endoethnonym "Tatars" used by the Chulyms. The question of classifying the Chulym language as a dialect of the Khakass language was debated. A brief linguistic analysis shows that many of these dialects exhibit features which are quite different from the Volga–Ural Tatar varieties, and should be classified as Turkic varieties belonging to several sub-groups of the Turkic languages, distinct from the Kipchak languages to which Volga–Ural Tatar belongs.[citation needed] Phonology There exist several interpretations of the Tatar vowel phonemic inventory. In total Tatar has nine or ten native vowels, and three or four loaned vowels (mainly in Russian loanwords). According to Baskakov (1988), Tatar has only two vowel heights, high and low. 
There are two low vowels, front and back, while there are eight high vowels: front and back, round (R+) and unround (R−), normal and short (or reduced). Poppe (1963) proposed a similar yet slightly different scheme, with a third, higher-mid height and nine vowels. According to Makhmutova (1969), Tatar has three vowel heights: high, mid and low, and four tongue positions: front, front-central, back-central and back (as they are named when cited). The mid back unrounded vowel ë is usually transcribed as ı, though it differs from the corresponding Turkish vowel. The tenth vowel ï is realized as the diphthong ëy (IPA: [ɯɪ]), which only occurs word-finally, but it has been argued to be an independent phoneme. Phonetically, each native vowel can be given with its Cyrillic letter and a usual Latin romanization. In polysyllabic words, the front-back distinction is lost in reduced vowels: all become mid-central. The mid reduced vowels in an unstressed position are frequently elided, as in кеше keşe [kĕˈʃĕ] > [kʃĕ] 'person', or кышы qışı [qɤ̆ˈʃɤ̆] > [qʃɤ̆] '(his) winter'. Low back /ɑ/ is rounded [ɒ] in the first syllable and after [ɒ], but not in the last, as in бала bala [bɒˈlɑ] 'child', балаларга balalarğa [bɒlɒlɒrˈʁɑ] 'to children'. In Russian loans there are also [ɨ], [ɛ], [ɔ], and [ä], written the same as the native vowels: ы, е/э, о, а respectively. Historically, the Old Turkic mid vowels were raised from mid to high, whereas the Old Turkic high vowels have become the Tatar reduced mid series. (The same shifts have also happened in Bashkir.) Tatar consonants usually undergo slight palatalization before front vowels. However, this allophony is not significant and does not have phonemic status. This differs from Russian, where palatalized consonants are not allophones but phonemes in their own right. There are a number of Russian loanwords which have palatalized consonants in Russian and are thus written the same in Tatar (often with the "soft sign" ь). The Tatar standard pronunciation also requires palatalization in such loanwords; however, some Tatar speakers may pronounce them without palatalization. In native words there are six types of syllables, built from consonants (C), vowels (V) and sonorants (S). Loanwords allow other types: CSV (gra-mota), CSVC (käs-trül), etc. Stress is usually on the final syllable. However, some suffixes cannot be stressed, so the stress shifts to the syllable before that suffix, even if the stressed syllable is the third or fourth from the end. A number of Tatar words and grammatical forms have the natural stress on the first syllable. Loanwords, mainly from Russian, usually preserve their original stress (unless the original stress is on the last syllable; in that case the stress in Tatar shifts to suffixes as usual, e.g. sovét > sovetlár > sovetlarğá). Tatar phonotactics dictate many pronunciation changes which are not reflected in the orthography. Grammar Like other Turkic languages, Tatar is an agglutinative language. Tatar nouns are inflected for case and number. Case suffixes change depending on the final consonants of the noun, and final consonants such as p/k (п/к) are voiced to b/g (б/г) when a possessive suffix is added (kitap → kitabım / китабым, "my book"). The suffixes below are given in their back-vowel forms; the corresponding front-vowel variants follow the vowel harmony described in the Phonology section. 
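The consonant voicing just described (kitap → kitabım) is regular enough to state as a rule. The snippet below is a toy Java sketch of that single rule for back-vowel nouns ending in p or k; the class and method names are mine, and the rest of Tatar morphology (front/back vowel harmony, vowel-final stems) is deliberately left out.

```java
public class TatarPossessive {
    // Toy rule: before the 1st person singular possessive suffix,
    // a final p/k is voiced to b/g, e.g. kitap -> kitabım ("my book").
    // Only consonant-final, back-vowel nouns are handled; vowel harmony
    // and vowel-final stems are ignored in this sketch.
    static String firstSingularPossessive(String noun) {
        char last = noun.charAt(noun.length() - 1);
        String stem = noun.substring(0, noun.length() - 1);
        if (last == 'p') stem += 'b';
        else if (last == 'k') stem += 'g';
        else stem += last;
        return stem + "ım"; // back-vowel form of the suffix
    }

    public static void main(String[] args) {
        System.out.println(firstSingularPossessive("kitap")); // kitabım
    }
}
```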
The declension of possessive suffixes is even more irregular, with the dative suffix -а used with the 1st and 2nd person singular suffixes, and the accusative, dative, locative, and ablative endings -н, -на, -нда, -ннан used after the 3rd person possessive suffix. Nouns ending in -и, -у, or -ү, although these endings are phonologically vowels, take consonantal endings. The declension of personal and demonstrative pronouns tends to be irregular. The distribution of the present tense suffixes is complicated: one form (also subject to vowel harmony) is used with verb stems ending in consonants, and the other with verb stems ending in vowels (the last stem vowel being deleted, eşläw / эшләү – eşli / эшли; compare Turkish işlemek – continuous işliyor). The distribution of the indefinite future tense suffix is more complicated for consonant-ending stems; it is resolved by the -арга/-ырга infinitives (yazarga / язарга – yazar / язар). However, because some verbs have citation forms in the verbal noun (-у), this rule becomes somewhat unpredictable. Tenses are negated with -ма; however, in the indefinite future tense and the verbal participle the negative becomes -mas / -мас and -mıyça / -мыйча, respectively. With vowel-ending stems, the suffix also becomes -мый when negating the present tense. To form interrogatives, the suffix -мы is used. The definite past and conditional tenses use type II personal inflections instead; in the present tense, the short ending (-м) is used. After vowels, the first person imperative forms delete the last vowel, as the present tense does (eşläw – eşlim). Like the plurals of nouns, the suffix -лар changes depending on the preceding consonant (-alar, but -ğannar). Some verbs, however, fall into this category. Dozens of them have irregular stems with a final mid vowel that is obscured in the infinitive (uqu – uqı, uqıy; tözü – töze, tözi). The verbs qoru / кору "to build", tanu / тану "to disclaim", and taşu / ташу "to spill" contrast with their final-vowelled counterparts, meaning "to dry", "to know", and "to carry". The verb дию (diyu) "to say" is significantly more irregular than any other verb: its 2nd person singular imperative is digen (диген), while its expected regular form is repurposed as the present tense forms (dim, diñ, di…). These predicative suffixes have now fallen into disuse or are rarely used. Writing system During its history, Tatar has been written in Arabic, Latin and Cyrillic scripts. Before 1928, Tatar was mostly written in the Arabic script (Иске имля/İske imlâ, "Old orthography", to 1920; Яңа имла/Yaña imlâ, "New orthography", 1920–1928). During the 19th century, the Russian Christian missionary Nikolay Ilminsky devised the first Cyrillic alphabet for Tatar. This alphabet is still used by Christian Tatars (Kryashens). In the Soviet Union after 1928, Tatar was written with a Latin alphabet called Jaꞑalif. In 1939, in Tatarstan and all other parts of the Soviet Union, a Cyrillic script was adopted and is still used to write Tatar. It is also used in Kazakhstan. The Republic of Tatarstan passed a law in 1999 that came into force in 2001 establishing an official Tatar Latin alphabet. A Russian federal law overrode it in 2002, making Cyrillic the sole official script in Tatarstan since then. 
In 2004, an attempt to introduce a Latin-based alphabet for Tatar was abandoned when the Constitutional Court ruled that the federal law of 15 November 2002 mandating the use of Cyrillic for the state languages of the republics of the Russian Federation does not contradict the Russian constitution. In accordance with this Constitutional Court ruling, on 28 December 2004, the Tatar Supreme Court overturned the Tatarstani law that made the Latin alphabet official. In 2012, the Tatarstan government adopted a new Latin alphabet, but with limited usage (mostly for romanization). In 2024, the modified Common Turkic Alphabet replaced the letter ä with ə, which was already in use in Azerbaijani as well as among Tatar activists using the Latin alphabet. History The ancestors of Tatar are the extinct Turkic Bulgar and Kipchak languages. The literary Tatar language is based on the Central Tatar (Kazan) dialect and on Türki, also known as the Old Tatar language. Both are members of the Volga-Ural subgroup of the Kipchak group of Turkic languages, although they also partly derive from the ancient Volga Bulgar language. Crimean Tatar, although similar by name, belongs to another subgroup of the Kipchak languages. Unlike Kazan Tatar, Crimean Tatar is heavily influenced by Turkish (mostly its Ottoman variety with Arabic and Persian influences) and Nogai. Most of the Uralic languages in the Volga River area have strongly influenced the Tatar language, as have the Arabic, Persian and Russian languages. The Arabic and Persian influence on Tatar can be seen most clearly in loan words but also in specific sounds. For example, Tatar ğ / г is the Arabic ghayn غ. However, in Arabic words and names where there is an ayin ع, Tatar adds the ghayn instead (عبد الله, ’Abdullah; Tatar: Ğabdulla / Габдулла; Yaña imlâ: غابدوللا /ʁabdulla/). In the Mishar Tatar dialect, ğ is not pronounced, and thus a word like şiğır (شعر, шигыр, "poem") is şigır or şiyır for Mishars (who in Finland use the Latin alphabet). When it comes to Arabic and Persian loanwords, in the Tatar Latin script, alif is realised as the letter a, and when there is no alif, it is ä (ə) (عيسى, Ğəysə; آزاد, Azat). When the alif has hamza on top (أ), it is also ä (ə), but Tatar İske imlâ spells it without (امين / أمين, Əmin). Vowel harmony is also a deciding factor (عبد الله, Ğabdulla; عبد الرشيد, Ğəbderrəşit). Similarly with ö/o (عمر, Ğömər; عثمان, Ğosman). However, this rule is often applied inconsistently when transliterating from Cyrillic to Latin. During the Golden Horde period (1242–1502), and even afterwards, the ancestors of modern Tatars used Persian in addition to their Turkic language to a relatively significant extent, especially in poetry. For example, the long-serving Khan of the Kazan Khanate (1438–1552), Möxəmməd-Əmin, wrote poetry in Persian. In religious and legal matters Arabic was used. Many Persian and Arabic works are considered part of Tatar literature today. Sample text The sample text is Article 1 of the Universal Declaration of Human Rights, given in Tatar (Cyrillic and Latin scripts), in an International Phonetic Alphabet transcription, and in English. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/BeanShell] | [TOKENS: 741] |
BeanShell BeanShell is a small, free, embeddable Java source interpreter with object scripting language features, written in Java. It runs in the Java Runtime Environment (JRE), dynamically executes standard Java syntax and extends it with common scripting conveniences such as loose types, commands, and method closures, like those in Perl and JavaScript. Features While BeanShell allows its users to define functions that can be called from within a script, its underpinning philosophy has been to not pollute its syntax with too many extensions and "syntactic sugar", thereby ensuring that code written for a Java compiler can usually be executed interpretively by BeanShell without any changes and, to a large extent, vice versa. This makes BeanShell a popular testing and debugging tool for the Java virtual machine (JVM) platform. BeanShell supports scripted objects as simple method closures like those in Perl and JavaScript. BeanShell is an open source project and has been incorporated into many applications, such as Apache OpenOffice, Apache Ant, the WebLogic Server application server, Apache JMeter, jEdit, ImageJ, JUMP GIS, Apache Taverna, and many others. BeanShell provides an easy-to-integrate application programming interface (API). It can also be run in command-line mode or within its own graphical environment. History The first versions of BeanShell (0.96, 1.0) were released by Patrick Niemeyer in 1999, followed by a series of versions. BeanShell 1.3.0 was released in August 2003. Version 2.0b1 was released in September 2003, culminating with version 2.0b4 in May 2005, which, as of January 2015, is the newest release posted on the official webpage. BeanShell has been included in the Linux distribution Debian since 1999. BeanShell was undergoing standardization through the Java Community Process (JCP) under JSR 274. Following the JCP approval of the BeanShell JSR Review Ballot in June 2005, no visible activity took place around BeanShell. The JSR 274 status is "Dormant". Since Java 9, Java instead includes JShell, a different read–eval–print loop (REPL) shell based on Java syntax, indicating that BeanShell will not be continued. A fork of BeanShell, BeanShell2, was created in May 2007 on the now-defunct Google Code Web site. The beanshell2 project made a number of fixes and enhancements to BeanShell and produced multiple releases. As of January 2020, the latest version of BeanShell2 is v2.1.9, released in March 2018. This fork was merged back into the original tree in 2018, retaining all the independent changes from both, and the official project has been hosted at GitHub. In December 2012, following a proposal to accept BeanShell as an Apache Incubator project, BeanShell was licensed to The Apache Software Foundation and migrated to Apache Extras, changing the license to the Apache License 2.0. The project was not accepted, but was instead projected to become part of Apache Commons at a future time. Due to changes in the developers' personal circumstances, the BeanShell community did not, however, complete the move to Apache, but remained at Apache Extras. The project has since released BeanShell 2.0b5, which is used by Apache OpenOffice and Apache Taverna. A Windows automated installer, BeanShell Double-Click, was created in 2013. It includes desktop integration features. |
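The scripting conveniences described above (loose typing, built-in commands such as print(), and scripted objects built from method closures) can be shown in a few lines. The script below is a minimal, illustrative sketch of these documented BeanShell features, not an example taken from the article; the counter() name is mine.

```java
// BeanShell script (Java syntax plus BeanShell's scripting extensions)

sum = 0;                          // loose typing: no declared type needed
for (int i = 1; i <= 5; i++)      // ordinary Java syntax runs unchanged
    sum += i;
print(sum);                       // print() is a BeanShell convenience command

counter() {                       // a scripted object: the method's scope...
    count = 0;
    increment() { count++; }
    return this;                  // ...is returned as a closure
}

c = counter();
c.increment();
c.increment();
print(c.count);                   // 2
```

Such a script can be run from the command line (java bsh.Interpreter script.bsh) or handed to an embedded bsh.Interpreter instance via its eval() method, which is the integration API the article refers to.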
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jews#cite_note-DigitalSamaritans-11] | [TOKENS: 15852] |
Jews Jews (Hebrew: יְהוּדִים, ISO 259-2: Yehudim, Israeli pronunciation: [jehuˈdim]), or the Jewish people, are an ethnoreligious group and nation, originating from the Israelites of ancient Israel and Judah. They traditionally adhere to Judaism. Jewish ethnicity, religion, and community are highly interrelated, as Judaism is an ethnic religion, though many ethnic Jews do not practice it. Religious Jews regard converts to Judaism as members of the Jewish nation, pursuant to the long-standing conversion process. The Israelites emerged from the pre-existing Canaanite peoples to establish Israel and Judah in the Southern Levant during the Iron Age. Originally, the term "Jews" referred to the inhabitants of the kingdom of Judah, who were distinguished from the gentiles and the Samaritans. According to the Hebrew Bible, these inhabitants predominantly originate from the tribe of Judah, who were descendants of Judah, the fourth son of Jacob. The tribe of Benjamin were another significant demographic in Judah and were considered Jews too. By the late 6th century BCE, Judaism had evolved from the Israelite religion, dubbed Yahwism (for Yahweh) by modern scholars, with a theology that religious Jews believe to be the expression of the Mosaic covenant between God and the Jewish people. After the Babylonian exile, the term referred to followers of Judaism, descendants of the Israelites, citizens of Judea, or allies of the Judean state. Jewish migration within the Mediterranean region during the Hellenistic period, followed by population transfers caused by events like the Jewish–Roman wars, gave rise to the Jewish diaspora, consisting of diverse Jewish communities that maintained their sense of Jewish history, identity, and culture. In the following millennia, Jewish diaspora communities coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (Central and Eastern Europe), the Sephardim (Iberian Peninsula), and the Mizrahim (Middle East and North Africa). While these three major divisions account for most of the world's Jews, there are other smaller Jewish groups outside of the three. Prior to World War II, the global Jewish population reached a peak of 16.7 million, representing around 0.7% of the world's population at that time. During World War II, approximately six million Jews throughout Europe were systematically murdered by Nazi Germany in a genocide known as the Holocaust. Since then, the population has slowly risen again, and as of 2021 was estimated at 15.2 million by the demographer Sergio Della Pergola, or less than 0.2% of the total world population in 2012. Today, over 85% of Jews live in Israel or the United States. Israel, whose population is 73.9% Jewish, is the only country where Jews comprise more than 2.5% of the population. Jews have significantly influenced and contributed to the development and growth of human progress in many fields, both historically and in modern times, including in science and technology, philosophy, ethics, literature, governance, business, art, music, comedy, theatre, cinema, architecture, food, medicine, and religion. Jews founded Christianity and had an indirect but profound influence on Islam. In these ways and others, Jews have played a significant role in the development of Western culture. Name and etymology The term "Jew" is derived from the Hebrew word יְהוּדִי Yehudi, with the plural יְהוּדִים Yehudim. 
Endonyms in other Jewish languages include the Ladino ג׳ודיו Djudio (plural ג׳ודיוס, Djudios) and the Yiddish ייִד Yid (plural ייִדן Yidn). Though Genesis 29:35 and 49:8 connect "Judah" with the verb yada, meaning "praise", scholars generally agree that "Judah" most likely derives from the name of a Levantine geographic region dominated by gorges and ravines. The gradual ethnonymic shift from "Israelites" to "Jews", regardless of their descent from Judah, although not contained in the Torah, is made explicit in the Book of Esther (4th century BCE) of the Tanakh. Some modern scholars disagree with the conflation, based on the works of Josephus, Philo and the Apostle Paul. The English word "Jew" is a derivation of Middle English Gyw, Iewe. The latter was loaned from the Old French giu, which itself evolved from the earlier juieu, which in turn derived from judieu/iudieu, which through elision had dropped the letter "d" from the Medieval Latin Iudaeus, which, like the New Testament Greek term Ioudaios, meant both "Jew" and "Judean" / "of Judea". The Greek term was a loan from Aramaic *yahūdāy, corresponding to Hebrew יְהוּדִי Yehudi. Some scholars prefer translating Ioudaios as "Judean" in the Bible since it is more precise, denotes the community's origins and prevents readers from engaging in antisemitic eisegesis. Others disagree, believing that it erases the Jewish identity of Biblical characters such as Jesus. Daniel R. Schwartz distinguishes between "Judean" and "Jew". Here, "Judean" refers to the inhabitants of Judea, which encompassed southern Palestine. Meanwhile, "Jew" refers to the descendants of Israelites that adhere to Judaism. Converts are included in the definition. But Shaye J.D. Cohen argues that "Judean" is inclusive of believers in the Judean God and allies of the Judean state. Another scholar, Jodi Magness, wrote that the term Ioudaioi refers to a "people of Judahite/Judean ancestry who worshipped the God of Israel as their national deity and (at least nominally) lived according to his laws." The etymological equivalent is in use in other languages, e.g., يَهُودِيّ yahūdī (sg.), al-yahūd (pl.), in Arabic, "Jude" in German, "judeu" in Portuguese, "Juif" (m.)/"Juive" (f.) in French, "jøde" in Danish and Norwegian, "judío/a" in Spanish, "jood" in Dutch, "żyd" in Polish etc., but derivations of the word "Hebrew" are also in use to describe a Jew, e.g., in Italian (Ebreo), in Persian ("Ebri/Ebrani" (Persian: عبری/عبرانی)) and Russian (Еврей, Yevrey). The German word "Jude" is pronounced [ˈjuːdə]; the corresponding adjective "jüdisch" [ˈjyːdɪʃ] (Jewish) is the origin of the word "Yiddish". According to The American Heritage Dictionary of the English Language, fourth edition (2000): "It is widely recognized that the attributive use of the noun Jew, in phrases such as Jew lawyer or Jew ethics, is both vulgar and highly offensive. In such contexts Jewish is the only acceptable possibility. Some people, however, have become so wary of this construction that they have extended the stigma to any use of Jew as a noun, a practice that carries risks of its own. In a sentence such as There are now several Jews on the council, which is unobjectionable, the substitution of a circumlocution like Jewish people or persons of Jewish background may in itself cause offense for seeming to imply that Jew has a negative connotation when used as a noun." 
Identity Judaism shares some of the characteristics of a nation, an ethnicity, a religion, and a culture, making the definition of who is a Jew vary slightly depending on whether a religious or national approach to identity is used.[better source needed] Generally, in modern secular usage, Jews include three groups: people who were born to a Jewish family regardless of whether or not they follow the religion, those who have some Jewish ancestral background or lineage (sometimes including those who do not have strictly matrilineal descent), and people without any Jewish ancestral background or lineage who have formally converted to Judaism and therefore are followers of the religion. In the context of biblical and classical literature, Jews could refer to inhabitants of the Kingdom of Judah, or the broader Judean region, allies of the Judean state, or anyone that followed Judaism. Historical definitions of Jewish identity have traditionally been based on halakhic definitions of matrilineal descent, and halakhic conversions. These definitions of who is a Jew date back to the codification of the Oral Torah into the Babylonian Talmud, around 200 CE. Interpretations by Jewish sages of sections of the Tanakh – such as Deuteronomy 7:1–5, which forbade intermarriage between their Israelite ancestors and seven non-Israelite nations: "for that [i.e. giving your daughters to their sons or taking their daughters for your sons,] would turn away your children from following me, to serve other gods"[failed verification] – are used as a warning against intermarriage between Jews and gentiles. Leviticus 24:10 says that the son in a marriage between a Hebrew woman and an Egyptian man is "of the community of Israel." This is complemented by Ezra 10:2–3, where Israelites returning from Babylon vow to put aside their gentile wives and their children. A popular theory is that the rape of Jewish women in captivity brought about the law of Jewish identity being inherited through the maternal line, although scholars challenge this theory citing the Talmudic establishment of the law from the pre-exile period. Another argument is that the rabbis changed the law of patrilineal descent to matrilineal descent due to the widespread rape of Jewish women by Roman soldiers. Since the anti-religious Haskalah movement of the late 18th and 19th centuries, halakhic interpretations of Jewish identity have been challenged. According to historian Shaye J. D. Cohen, the status of the offspring of mixed marriages was determined patrilineally in the Bible. He brings two likely explanations for the change in Mishnaic times: first, the Mishnah may have been applying the same logic to mixed marriages as it had applied to other mixtures (Kil'ayim). Thus, a mixed marriage is forbidden as is the union of a horse and a donkey, and in both unions the offspring are judged matrilineally. Second, the Tannaim may have been influenced by Roman law, which dictated that when a parent could not contract a legal marriage, offspring would follow the mother. Rabbi Rivon Krygier follows a similar reasoning, arguing that Jewish descent had formerly passed through the patrilineal descent and the law of matrilineal descent had its roots in the Roman legal system. Origins The prehistory and ethnogenesis of the Jews are closely intertwined with archaeology, biology, historical textual records, mythology, and religious literature. 
The ethnic origins of the Jews lie in the Israelites, a confederation of Iron Age Semitic-speaking tribes that inhabited a part of Canaan during the tribal and monarchic periods. Modern Jews are named after and also descended from the southern Israelite Kingdom of Judah. Gary A. Rendsburg links the early confederation of Canaanite nomadic pastoralists to the Shasu known to the Egyptians around the 15th century BCE. According to the Hebrew Bible narrative, Jewish history begins with the Biblical patriarchs such as Abraham, his son Isaac, Isaac's son Jacob, and the Biblical matriarchs Sarah, Rebecca, Leah, and Rachel, who lived in Canaan. The twelve sons of Jacob subsequently fathered the Twelve Tribes. Jacob and his family migrated to Ancient Egypt after being invited to live with Jacob's son Joseph by the Pharaoh himself. Jacob's descendants were later enslaved until the Exodus, led by Moses. Afterwards, the Israelites conquered Canaan under Moses' successor Joshua, and went through the period of the Biblical judges after the death of Joshua. Through the mediation of Samuel, the Israelites became subject to a king, Saul, who was succeeded by David and then Solomon, after whom the United Monarchy ended and was split into a separate Kingdom of Israel and a Kingdom of Judah. The Kingdom of Judah is described as comprising the tribes of Judah, Benjamin and, partially, Levi; it later assimilated remnants of other tribes who migrated there from the northern Kingdom of Israel. In the extra-biblical record, the Israelites become visible as a people between 1200 and 1000 BCE. There is well-accepted archaeological evidence referring to "Israel" in the Merneptah Stele, which dates to about 1200 BCE, and in the Mesha stele from 840 BCE. It is debated whether a period like that of the Biblical judges occurred and whether there ever was a United Monarchy. There is further disagreement about the earliest existence of the Kingdoms of Israel and Judah and their extent and power. Historians agree that a Kingdom of Israel existed by c. 900 BCE, and there is a consensus that a Kingdom of Judah existed by c. 700 BCE at the latest; recent excavations in Khirbet Qeiyafa have provided strong evidence for dating the Kingdom of Judah to the 10th century BCE. In 587 BCE, Nebuchadnezzar II, King of the Neo-Babylonian Empire, besieged Jerusalem, destroyed the First Temple and deported parts of the Judahite population. Scholars disagree regarding the extent to which the Bible should be accepted as a historical source for early Israelite history. Rendsburg states that there are two approximately equal groups of scholars who debate the historicity of the biblical narrative: the minimalists, who largely reject it, and the maximalists, who largely accept it, with the minimalists being the more vocal of the two. Some of the leading minimalists reframe the biblical account as constituting the Israelites' inspiring national myth narrative, suggesting that according to the modern archaeological and historical account, the Israelites and their culture did not overtake the region by force, but instead branched out of the Canaanite peoples and culture through the development of a distinct monolatristic—and later monotheistic—religion of Yahwism centered on Yahweh, one of the gods of the Canaanite pantheon. The growth of Yahweh-centric belief, along with a number of cultic practices, gradually gave rise to a distinct Israelite ethnic group, setting them apart from other Canaanites. 
According to Dever, modern archaeologists have largely discarded the search for evidence of the biblical narrative surrounding the patriarchs and the exodus. According to the maximalist position, the modern archaeological record independently points to a narrative which largely agrees with the biblical account. This narrative presents the Israelites as a nomadic people known to the Egyptians as belonging to the Shasu. Over time these nomads left the desert and settled on the central mountain range of the land of Canaan, in simple semi-nomadic settlements in which pig bones are notably absent. This population gradually shifted from a tribal lifestyle to a monarchy. The archaeological record of the ninth century BCE provides evidence for two monarchies: one in the south under a dynasty founded by a figure named David, with its capital in Jerusalem, and one in the north under a dynasty founded by a figure named Omri, with its capital in Samaria. It also points to an early monarchic period in which these regions shared material culture and religion, suggesting a common origin. Archaeological finds also provide evidence for the later cooperation of these two kingdoms in their coalition against Aram, and for their destruction by the Assyrians and later by the Babylonians. Genetic studies on Jews show that most Jews worldwide bear a common genetic heritage which originates in the Middle East, and that they share certain genetic traits with Gentile peoples of the Fertile Crescent. The genetic composition of different Jewish groups shows that Jews share a common gene pool dating back four millennia, as a marker of their common ancestral origin. Despite their long-term separation, Jewish communities maintained their unique commonalities, propensities, and sensibilities in culture, tradition, and language. History The earliest recorded evidence of a people by the name of Israel appears in the Merneptah Stele, which dates to around 1200 BCE. The majority of scholars agree that this text refers to the Israelites, a group that inhabited the central highlands of Canaan, where archaeological evidence shows that hundreds of small settlements were constructed between the 12th and 10th centuries BCE. The Israelites differentiated themselves from neighboring peoples through various distinct characteristics, including religious practices, a prohibition on intermarriage, and an emphasis on genealogy and family history. In the 10th century BCE, two neighboring Israelite kingdoms—the northern Kingdom of Israel and the southern Kingdom of Judah—emerged. Since their inception, they shared ethnic, cultural, linguistic and religious characteristics despite a complicated relationship. Israel, with its capital mostly in Samaria, was larger and wealthier, and soon developed into a regional power. In contrast, Judah, with its capital in Jerusalem, was less prosperous and covered a smaller, mostly mountainous territory. However, while in Israel the royal succession was often decided by a military coup d'état, resulting in several dynasty changes, political stability in Judah was much greater, as it was ruled by the House of David for the whole four centuries of its existence. Scholars also describe Biblical Jews as a 'proto-nation', in the modern nationalist sense, comparable to the classical Greeks, the Gauls and the British Celts. Around 720 BCE, the Kingdom of Israel was destroyed when it was conquered by the Neo-Assyrian Empire, which came to dominate the ancient Near East. 
Under the Assyrian resettlement policy, a significant portion of the northern Israelite population was exiled to Mesopotamia and replaced by immigrants from the same region. During the same period, and throughout the 7th century BCE, the Kingdom of Judah, now under Assyrian vassalage, experienced a period of prosperity and witnessed a significant population growth. This prosperity continued until the Neo-Assyrian king Sennacherib devastated the region of Judah in response to a rebellion in the area, ultimately halting at Jerusalem. Later in the same century, the Assyrians were defeated by the rising Neo-Babylonian Empire, and Judah became its vassal. In 587 BCE, following a revolt in Judah, the Babylonian king Nebuchadnezzar II besieged and destroyed Jerusalem and the First Temple, putting an end to the kingdom. The majority of Jerusalem's residents, including the kingdom's elite, were exiled to Babylon. According to the Book of Ezra, the Persian Cyrus the Great ended the Babylonian exile in 538 BCE, the year after he captured Babylon. The exile ended with the return under Zerubbabel the Prince (so called because he was a descendant of the royal line of David) and Joshua the Priest (a descendant of the line of the former High Priests of the Temple) and their construction of the Second Temple circa 521–516 BCE. As part of the Persian Empire, the former Kingdom of Judah became the province of Judah (Yehud Medinata), with a smaller territory and a reduced population. Judea was under control of the Achaemenids until the fall of their empire in c. 333 BCE to Alexander the Great. After several centuries under foreign imperial rule, the Maccabean Revolt against the Seleucid Empire resulted in an independent Hasmonean kingdom, under which the Jews once again enjoyed political independence for a period spanning from 110 to 63 BCE. Under Hasmonean rule the boundaries of their kingdom were expanded to include not only the land of the historical kingdom of Judah, but also the Galilee and Transjordan. In the beginning of this process the Idumeans, who had infiltrated southern Judea after the destruction of the First Temple, were converted en masse. In 63 BCE, Judea was conquered by the Romans. From 37 BCE to 6 CE, the Romans allowed the Jews to maintain some degree of independence by installing the Herodian dynasty as vassal kings. However, Judea eventually came directly under Roman control and was incorporated into the Roman Empire as the province of Judaea. The Jewish–Roman wars, a series of failed uprisings against Roman rule during the first and second centuries CE, had profound and devastating consequences for the Jewish population of Judaea. The First Jewish–Roman War (66–73/74 CE) culminated in the destruction of Jerusalem and the Second Temple, after which the significantly diminished Jewish population was stripped of political autonomy. A few generations later, the Bar Kokhba revolt (132–136 CE) erupted in response to Roman plans to rebuild Jerusalem as a Roman colony, and, possibly, to restrictions on circumcision. Its violent suppression by the Romans led to the near-total depopulation of Judea, and the demographic and cultural center of Jewish life shifted to Galilee. Jews were subsequently banned from residing in Jerusalem and the surrounding area, and the province of Judaea was renamed Syria Palaestina. These developments effectively ended Jewish efforts to restore political sovereignty in the region for nearly two millennia. 
Similar upheavals impacted the Jewish communities in the empire's eastern provinces during the Diaspora Revolt (115–117 CE), leading to the near-total destruction of Jewish diaspora communities in Libya, Cyprus and Egypt, including the highly influential community in Alexandria. The destruction of the Second Temple in 70 CE brought profound changes to Judaism. With the Temple's central place in Jewish worship gone, religious practices shifted towards prayer, Torah study (including Oral Torah), and communal gatherings in synagogues. Judaism also lost much of its sectarian nature. Two of the three main sects that flourished during the late Second Temple period, namely the Sadducees and Essenes, eventually disappeared, while Pharisaic beliefs became the foundational, liturgical, and ritualistic basis of Rabbinic Judaism, which emerged as the prevailing form of Judaism from late antiquity onward. The Jewish diaspora existed well before the destruction of the Second Temple in 70 CE and had been ongoing for centuries, with the dispersal driven by both forced expulsions and voluntary migrations. In Mesopotamia, a testimony to the beginnings of the Jewish community can be found in Jehoiachin's ration tablets, listing provisions allotted to the exiled Judean king and his family by Nebuchadnezzar II; further evidence is provided by the Al-Yahudu tablets, dated to the 6th–5th centuries BCE and related to the exiles from Judea arriving after the destruction of the First Temple, though there is ample evidence for the presence of Jews in Babylonia even from 626 BCE. In Egypt, the documents from Elephantine reveal the trials of a community founded by a Jewish garrison in Persian service at two fortresses on the frontier during the 5th–4th centuries BCE, and according to Josephus the Jewish community in Alexandria existed since the founding of the city in the 4th century BCE by Alexander the Great. By 200 BCE, there were well-established Jewish communities both in Egypt and Mesopotamia ("Babylonia" in Jewish sources), and in the two centuries that followed, Jewish populations were also present in Asia Minor, Greece, Macedonia, Cyrene, and, beginning in the middle of the first century BCE, in the city of Rome. Later, in the first centuries CE, as a result of the Jewish–Roman wars, a large number of Jews were taken as captives, sold into slavery, or compelled to flee from the regions affected by the wars, contributing to the formation and expansion of Jewish communities across the Roman Empire as well as in Arabia and Mesopotamia. After the Bar Kokhba revolt, the Jewish population in Judaea, now significantly reduced, made efforts to recover from the revolt's devastating effects, but never fully regained its former strength. Between the second and fourth centuries CE, the region of Galilee emerged as the primary center of Jewish life in Syria Palaestina, experiencing both demographic growth and cultural development. It was during this period that two central rabbinic texts, the Mishnah and the Jerusalem Talmud, were composed. The Romans recognized the patriarchs—rabbinic sages such as Judah ha-Nasi—as representatives of the Jewish people, granting them a certain degree of autonomy. However, as the Roman Empire gave way to the Christianized Byzantine Empire under Constantine, Jews came to be persecuted by both the Church and the imperial authorities, and many emigrated to communities in the diaspora. 
By the fourth century CE, Jews are believed to have lost their demographic majority in Syria Palaestina. The long-established Jewish community of Mesopotamia, which had been living under Parthian and later Sasanian rule, beyond the confines of the Roman Empire, became an important center of Jewish study as Judea's Jewish population declined. Estimates often place the Babylonian Jewish community of the 3rd to 7th centuries at around one million, making it the largest Jewish diaspora community of that period. Under the political leadership of the exilarch, who was regarded as a royal heir of the House of David, this community had an autonomous status and served as a place of refuge for the Jews of Syria Palaestina. A number of significant Talmudic academies, such as the Nehardea, Pumbedita, and Sura academies, were established in Mesopotamia, and many important Amoraim were active there. The Babylonian Talmud, a centerpiece of Jewish religious law, was compiled in Babylonia in the 3rd to 6th centuries. Jewish diaspora communities are generally described as having coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (initially in the Rhineland and France), the Sephardim (initially in the Iberian Peninsula), and the Mizrahim (Middle East and North Africa). Romaniote Jews, Tunisian Jews, Yemenite Jews, Egyptian Jews, Ethiopian Jews, Bukharan Jews, Mountain Jews, and other groups also predated the arrival of the Sephardic diaspora. During the same period, Jewish communities in the Middle East thrived under Islamic rule, especially in cities like Baghdad, Cairo, and Damascus. In Babylonia, from the 7th to 11th centuries, the Pumbedita and Sura academies led Jewry in the Arab world and, to an extent, the entire Jewish world. The deans and students of these academies defined the Geonic period in Jewish history. Following this period were the Rishonim, who lived from the 11th to the 15th centuries. Like their European counterparts, Jews in the Middle East and North Africa also faced periods of persecution and discriminatory policies, with the Almohad Caliphate in North Africa and Iberia issuing forced conversion decrees, causing Jews such as Maimonides to seek safety in other regions. Despite experiencing repeated waves of persecution, Ashkenazi Jews in Western Europe worked in a variety of fields, making an impact on their communities' economies and societies. In Francia, for example, figures like Isaac Judaeus and Armentarius occupied prominent social and economic positions. Francia also witnessed the development of a sophisticated tradition of biblical commentary, as exemplified by Rashi and the tosafists. In 1144, the first documented blood libel occurred in Norwich, England, marking an escalation in the pattern of discrimination and violence that Jews had already been subjected to throughout medieval Europe. During the 12th and 13th centuries, Jews faced frequent antisemitic legislation, including laws prescribing distinctive dress, alongside segregation, repeated blood libels, pogroms, and massacres such as the earlier Rhineland massacres (1096). The Jews of the Holy Roman Empire were designated Servi camerae regis ("servants of the imperial chamber") by Frederick II, a status that afforded limited protection while simultaneously entangling them in the political struggles between the emperor and the German principalities and cities. 
Persecution intensified during the Black Death in the mid-14th century, when Jews were accused of poisoning wells and many communities were destroyed. These pressures, combined with major expulsions such as that from England in 1290, gradually pushed Ashkenazi Jewish populations eastward into Poland, Lithuania, and Russia. One of the largest Jewish communities of the Middle Ages was in the Iberian Peninsula, which for a time contained the largest Jewish population in Europe. Iberian Jewry endured discrimination under the Visigoths but saw its fortunes improve under Umayyad rule and later the Taifa kingdoms. During this period, the Jews of Muslim Spain entered a "Golden Age" marked by achievements in Hebrew poetry and literature, religious scholarship, grammar, medicine and science, with leading figures including Hasdai ibn Shaprut, Judah Halevi, Moses ibn Ezra and Solomon ibn Gabirol. Jews also rose to high office, most notably Samuel ibn Naghrillah, a scholar and poet who served as grand vizier and military commander of Granada. The Golden Age ended with the rise of the radical Almoravid and Almohad dynasties, whose persecutions drove many Jews from Iberia (including Maimonides), together with the advancing Reconquista. In 1391, widespread pogroms swept across Spain, leaving thousands dead and forcing mass conversions. The Spanish Inquisition was later established to pursue, torture and execute conversos who continued to practice Judaism in secret, while public disputations were staged to discredit Judaism. In 1492, after the Reconquista, Isabella I of Castile and Ferdinand II of Aragon decreed the expulsion of all Jews who refused conversion, sending an estimated 200,000 into exile in Portugal, Italy, North Africa, and the Ottoman Empire. In 1497, Portugal's Jews, about 30,000, were formally ordered expelled but instead were forcibly converted to retain their economic role. In 1498, some 3,500 Jews were expelled from Navarre. Many converts outwardly adopted Christianity while secretly preserving Jewish practices, becoming crypto-Jews (also known as marranos or anusim), who remained targets of the various Inquisitions for centuries. Following the expulsions from Spain and Portugal in the 1490s, Jewish exiles dispersed across the Mediterranean, Europe, and North Africa. Many settled in the Ottoman Empire—which, replacing the Iberian Peninsula, became home to the world's largest Jewish population—where new communities developed in Anatolia, the Balkans, and the Land of Israel. Cities such as Istanbul and Thessaloniki grew into major Jewish centers, while in 16th-century Safed a flourishing spiritual life took shape. There, Solomon Alkabetz, Moses Cordovero, and Isaac Luria developed influential new schools of Kabbalah, giving powerful impetus to Jewish mysticism, and Joseph Karo composed the Shulchan Aruch, which became a cornerstone of Jewish law. In the 17th century, Portuguese conversos who returned to Judaism and engaged in trade and banking helped establish Amsterdam as a prosperous Jewish center, while also forming communities in cities such as Antwerp and London. This period also witnessed waves of messianic fervor, most notably the rise of the Sabbatean movement in the 1660s, led by Sabbatai Zvi of İzmir, which reverberated throughout the Jewish world. In Eastern Europe, Poland–Lithuania became the principal center of Ashkenazi Jewry, eventually becoming home to the largest Jewish population in the world. 
Jewish life flourished there in the early modern era, supported by relative stability, economic opportunity, and strong communal institutions. The mid-17th century brought devastation with the Cossack uprisings in Ukraine, which reversed migration flows and sent refugees westward, yet Poland–Lithuania remained the demographic and cultural heartland of Ashkenazic Jewry. Following the partitions of Poland, most of its Jews came under Russian rule and were confined to the "Pale of Settlement." The 18th century also witnessed new religious and intellectual currents. Hasidism, founded by the Baal Shem Tov, emphasized mysticism and piety, while the Misnagdim ("opponents"), led by the Vilna Gaon, defended rabbinic scholarship and tradition. In Western Europe, during the 1760s and 1770s, the Haskalah (Jewish Enlightenment) emerged in German-speaking lands, where figures such as Moses Mendelssohn promoted secular learning, vernacular literacy, and integration into European society. Elsewhere, Jews began to be re-admitted to Western Europe, including England, where Menasseh ben Israel petitioned Oliver Cromwell for their return. In the Americas, Jews of Sephardic descent first arrived as conversos in Spanish and Portuguese colonies, where many faced trial by Inquisition tribunals for "judaizing." A more durable presence began in Dutch Brazil, where Jews openly practiced their religion and established the first synagogues in the New World, before the Portuguese reconquest forced their dispersal to Amsterdam, the Caribbean, and North America. Sephardic communities took root in Curaçao, Suriname, Jamaica, and Barbados, later joined by Ashkenazi migrants. In North America, Jews were present from the mid-17th century, with New Amsterdam hosting the first organized congregation in 1654. By the time of the American Revolution, small communities in New York, Newport, Philadelphia, Savannah, and Charleston played an active role in the struggle for independence. In the late 19th century, Jews in Western Europe gradually achieved legal emancipation, though social acceptance remained limited by persistent antisemitism and rising nationalism. In Eastern Europe, particularly within the Russian Empire's Pale of Settlement, Jews faced mounting legal restrictions and recurring pogroms. From this environment emerged Zionism, a national revival movement originating in Central and Eastern Europe that sought to re-establish a Jewish polity in the Land of Israel as a means of returning the Jewish people to their ancestral homeland and ending centuries of exile and persecution. This led to waves of Jewish migration to Ottoman-controlled Palestine. Theodor Herzl, who is considered the father of political Zionism, offered his vision of a future Jewish state in his 1896 book Der Judenstaat (The Jewish State); a year later, he presided over the First Zionist Congress. The antisemitism that afflicted Jewish communities in Europe also triggered a mass exodus of 2.8 million Jews to the United States between 1881 and 1924. Despite this, some Jews in Europe and the United States were able to make great achievements in various fields of science and culture. Among the most influential from this period are Albert Einstein in physics, Sigmund Freud in psychology, Franz Kafka in literature, and Irving Berlin in music. Many Nobel Prize winners at this time were Jewish, as is still the case. 
When Adolf Hitler and the Nazi Party came to power in Germany in 1933, the situation for Jews deteriorated rapidly as a direct result of Nazi policies. Many Jews fled from Europe to Mandatory Palestine, the United States, and the Soviet Union as a result of racial anti-Semitic laws, economic difficulties, and the fear of an impending war. World War II started in 1939, and by 1941 Nazi Germany occupied almost all of Europe. Following the German invasion of the Soviet Union in 1941, the Final Solution—an extensive, organized effort with an unprecedented scope intended to annihilate the Jewish people—began, and resulted in the persecution and murder of Jews in Europe and North Africa. In Poland, three million Jews were murdered in gas chambers in all camps combined, with one million at the Auschwitz camp complex alone. The Holocaust is the name given to this genocide, in which six million Jews in total were systematically murdered. Before and during the Holocaust, enormous numbers of Jews immigrated to Mandatory Palestine. In 1944, the Jewish insurgency in Mandatory Palestine began with the aim of gaining full independence from the United Kingdom. On 14 May 1948, upon the termination of the mandate, David Ben-Gurion declared the creation of the State of Israel, a Jewish and democratic state. Immediately afterwards, all neighboring Arab states invaded, and were resisted by the newly formed Israel Defense Forces. In 1949, the war ended and Israel started building its state and absorbing waves of aliyah, granting citizenship to Jews all over the world via the Law of Return, passed in 1950. However, both the Israeli–Palestinian conflict and the wider Arab–Israeli conflict continue to this day. Culture The Jewish people and the religion of Judaism are strongly interrelated. Converts to Judaism have a status within the Jewish people equal to those born into it; however, converts who do not go on to practice Judaism are likely to be viewed with skepticism. Mainstream Judaism does not proselytize, and conversion is considered a difficult task. A significant portion of conversions are undertaken by children of mixed marriages, or by would-be or current spouses of Jews. The Hebrew Bible, a religious interpretation of the traditions and early history of the Jews, established the first of the Abrahamic religions, which are now practiced by 54 percent of the world's population. Judaism guides its adherents in both practice and belief, and has been called not only a religion, but also a "way of life," which has made drawing a clear distinction between Judaism, Jewish culture, and Jewish identity rather difficult. Throughout history, in eras and places as diverse as the ancient Hellenic world, Europe before and after the Age of Enlightenment (see Haskalah), Islamic Spain and Portugal, North Africa and the Middle East, India, China, and the contemporary United States and Israel, cultural phenomena have developed that are in some sense characteristically Jewish without being at all specifically religious. Some factors in this come from within Judaism, others from the interaction of Jews or specific communities of Jews with their surroundings, and still others from the inner social and cultural dynamics of the community, as opposed to from the religion itself. This phenomenon has led to considerably different Jewish cultures unique to their own communities. 
Hebrew is the liturgical language of Judaism (termed lashon ha-kodesh, "the holy tongue"), the language in which most of the Hebrew scriptures (Tanakh) were composed, and the daily speech of the Jewish people for centuries. By the 5th century BCE, Aramaic, a closely related tongue, joined Hebrew as the spoken language in Judea. By the 3rd century BCE, some Jews of the diaspora were speaking Greek. Others, such as in the Jewish communities of Asoristan, known to Jews as Babylonia, were speaking Hebrew and Aramaic, the languages of the Babylonian Talmud. Dialects of these same languages were also used by the Jews of Syria Palaestina at that time.[citation needed] For centuries, Jews worldwide have spoken the local or dominant languages of the regions they migrated to, often developing distinctive dialectal forms or branches that became independent languages. Yiddish is the Judaeo-German language developed by Ashkenazi Jews who migrated to Central Europe. Ladino is the Judaeo-Spanish language developed by Sephardic Jews who migrated to the Iberian Peninsula. Due to many factors, including the impact of the Holocaust on European Jewry, the Jewish exodus from Arab and Muslim countries, and widespread emigration from other Jewish communities around the world, ancient and distinct Jewish languages of several communities, including Judaeo-Georgian, Judaeo-Arabic, Judaeo-Berber, Krymchak, Judaeo-Malayalam and many others, have largely fallen out of use. For over sixteen centuries Hebrew was used almost exclusively as a liturgical language, and as the language in which most books had been written on Judaism, with a few speaking only Hebrew on the Sabbath. Hebrew was revived as a spoken language by Eliezer ben Yehuda, who arrived in Palestine in 1881. It had not been used as a mother tongue since Tannaic times. Modern Hebrew is designated as the "State language" of Israel. Despite efforts to revive Hebrew as the national language of the Jewish people, knowledge of the language is not commonly possessed by Jews worldwide and English has emerged as the lingua franca of the Jewish diaspora. Although many Jews once had sufficient knowledge of Hebrew to study the classic literature, and Jewish languages like Yiddish and Ladino were commonly used as recently as the early 20th century, most Jews lack such knowledge today and English has by and large superseded most Jewish vernaculars. The three most commonly spoken languages among Jews today are Hebrew, English, and Russian. Some Romance languages, particularly French and Spanish, are also widely used. Yiddish has been spoken by more Jews in history than any other language, but it is far less used today following the Holocaust and the adoption of Modern Hebrew by the Zionist movement and the State of Israel. In some places, the mother language of the Jewish community differs from that of the general population or the dominant group. For example, in Quebec, the Ashkenazic majority has adopted English, while the Sephardic minority uses French as its primary language. Similarly, South African Jews adopted English rather than Afrikaans. Due to both Czarist and Soviet policies, Russian has superseded Yiddish as the language of Russian Jews, but these policies have also affected neighboring communities. Today, Russian is the first language for many Jewish communities in a number of Post-Soviet states, such as Ukraine and Uzbekistan,[better source needed] as well as for Ashkenazic Jews in Azerbaijan, Georgia, and Tajikistan. 
Although communities in North Africa today are small and dwindling, Jews there had shifted from a multilingual group to a monolingual one (or nearly so), speaking French in Algeria, Morocco, and the city of Tunis, while most North Africans continue to use Arabic or Berber as their mother tongue. There is no single governing body for the Jewish community, nor a single authority with responsibility for religious doctrine. Instead, a variety of secular and religious institutions at the local, national, and international levels lead various parts of the Jewish community on a variety of issues. Today, many countries have a Chief Rabbi who serves as a representative of that country's Jewry. Although many Hasidic Jews follow a certain hereditary Hasidic dynasty, there is no one commonly accepted leader of all Hasidic Jews. Many Jews believe that the Messiah will act as a unifying leader for Jews and the entire world. A number of modern scholars of nationalism support the existence of Jewish national identity in antiquity. One of them is David Goodblatt, who generally believes in the existence of nationalism before the modern period. In his view, the Bible, the parabiblical literature and the Jewish national history provide the basis for a Jewish collective identity. Although many of the ancient Jews were illiterate (as were their neighbors), their national narrative was reinforced through public readings. The Hebrew language also constructed and preserved national identity. Although it was not widely spoken after the 5th century BCE, Goodblatt states: "the mere presence of the language in spoken or written form could invoke the concept of a Jewish national identity. Even if one knew no Hebrew or was illiterate, one could recognize that a group of signs was in Hebrew script. ... It was the language of the Israelite ancestors, the national literature, and the national religion. As such it was inseparable from the national identity. Indeed its mere presence in visual or aural medium could invoke that identity." Anthony D. Smith, a historical sociologist considered one of the founders of the field of nationalism studies, wrote that the Jews of the late Second Temple period provide "a closer approximation to the ideal type of the nation [...] than perhaps anywhere else in the ancient world." He adds that this observation "must make us wary of pronouncing too readily against the possibility of the nation, and even a form of religious nationalism, before the onset of modernity." Agreeing with Smith, Goodblatt suggests omitting the qualifier "religious" from Smith's definition of ancient Jewish nationalism, noting that, according to Smith, a religious component in national memories and culture is common even in the modern era. This view is echoed by political scientist Tom Garvin, who writes that "something strangely like modern nationalism is documented for many peoples in medieval times and in classical times as well," citing the ancient Jews as one of several "obvious examples", alongside the classical Greeks and the Gaulish and British Celts. Fergus Millar suggests that the sources of Jewish national identity and of early Jewish nationalist movements in the first and second centuries CE included several key elements: the Bible as both a national history and legal source, the Hebrew language as a national language, a system of law, and social institutions such as schools, synagogues, and Sabbath worship. 
Adrian Hastings argued that the Jews are the "true proto-nation", which, through the model of ancient Israel found in the Hebrew Bible, provided the world with the original concept of nationhood, one that later influenced Christian nations. However, following Jerusalem's destruction in the first century CE, Jews ceased to be a political entity and did not resemble a traditional nation-state for almost two millennia. Despite this, they maintained their national identity through collective memory, religion and sacred texts, even without land or political power, and remained a nation rather than just an ethnic group, eventually leading to the rise of Zionism and the establishment of Israel. Steven Weitzman suggests that Jewish nationalist sentiment in antiquity was encouraged because under foreign rule (Persians, Greeks, Romans) Jews were able to claim that they were an ancient nation. This claim was based on the preservation and reverence of their scriptures, the Hebrew language, the Temple and priesthood, and other traditions of their ancestors. Doron Mendels further observes that the Hasmonean kingdom, one of the few examples of indigenous statehood at its time, significantly reinforced Jewish national consciousness. The memory of this period of independence contributed to the persistent efforts to revive Jewish sovereignty in Judea, leading to the major revolts against Roman rule in the 1st and 2nd centuries CE. Demographics Within the world's Jewish population there are distinct ethnic divisions, most of which are primarily the result of geographic branching from an originating Israelite population and subsequent independent evolution. An array of Jewish communities was established by Jewish settlers in various places around the Old World, often at great distances from one another, resulting in effective and often long-term isolation. During the millennia of the Jewish diaspora the communities would develop under the influence of their local environments: political, cultural, natural, and populational. Today, manifestations of these differences among the Jews can be observed in the Jewish cultural expressions of each community, including Jewish linguistic diversity, culinary preferences, liturgical practices, religious interpretations, as well as degrees and sources of genetic admixture. Jews are often identified as belonging to one of two major groups: the Ashkenazim and the Sephardim. Ashkenazim are so named in reference to their geographical origins (their ancestors' culture coalesced in the Rhineland, an area historically referred to by Jews as Ashkenaz). Similarly, Sephardim (Sefarad meaning "Spain" in Hebrew) are named in reference to their origins in Iberia. The diverse groups of Jews of the Middle East and North Africa are often collectively referred to as Sephardim, together with Sephardim proper, for liturgical reasons having to do with their prayer rites. A common term for many of these non-Spanish Jews who are sometimes still broadly grouped as Sephardim is Mizrahim (lit. 'easterners' in Hebrew). Nevertheless, Mizrahim and Sephardim are usually ethnically distinct. Smaller groups include, but are not restricted to, Indian Jews such as the Bene Israel, Bnei Menashe, Cochin Jews, and Bene Ephraim; the Romaniotes of Greece; the Italian Jews ("Italkim" or "Bené Roma"); the Teimanim from Yemen; various African Jews, including most numerously the Beta Israel of Ethiopia; and Chinese Jews, most notably the Kaifeng Jews, as well as various other distinct but now almost extinct communities. 
The divisions between all these groups are approximate and their boundaries are not always clear. The Mizrahim, for example, are a heterogeneous collection of North African, Central Asian, Caucasian, and Middle Eastern Jewish communities that are no more closely related to each other than they are to any of the earlier-mentioned Jewish groups. In modern usage, however, the Mizrahim are sometimes termed Sephardi due to similar styles of liturgy, despite independent development from Sephardim proper. Thus, among Mizrahim there are Egyptian Jews, Iraqi Jews, Lebanese Jews, Kurdish Jews, Moroccan Jews, Libyan Jews, Syrian Jews, Bukharian Jews, Mountain Jews, Georgian Jews, Iranian Jews, Afghan Jews, and various others. The Teimanim from Yemen are sometimes included, although their style of liturgy is unique and the admixture found among them differs from that found among Mizrahim. In addition, there is a differentiation made between Sephardi migrants who established themselves in the Middle East and North Africa after the expulsion of the Jews from Spain and Portugal in the 1490s and the pre-existing Jewish communities in those regions. Ashkenazi Jews represent the bulk of modern Jewry, with at least 70 percent of Jews worldwide (and up to 90 percent prior to World War II and the Holocaust). As a result of their emigration from Europe, Ashkenazim also represent the overwhelming majority of Jews in the New World continents, in countries such as the United States, Canada, Argentina, Australia, and Brazil. In France, the immigration of Jews from Algeria (Sephardim) has led them to outnumber the Ashkenazim. Only in Israel is the Jewish population representative of all groups, a melting pot independent of each group's proportion within the overall world Jewish population. Y-DNA studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male-line ancestors appear to have been mainly Middle Eastern. For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany, and the French Rhine Valley. This is consistent with Jewish traditions placing most Jewish paternal origins in the region of the Middle East. Conversely, the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In contrast, Behar has found evidence that about 40 percent of Ashkenazi Jews originate maternally from just four female founders, who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of non-local maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews, the authors conclude: "Clearly, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons." 
However, a 2025 genetic study on the Ashkenazi Jewish founder population supports the presence of a substantial Near Eastern component in the maternal lineages. Analyses of mitochondrial DNA (mtDNA) indicate that the core founder lineages, estimated at around 54, likely originated from the Near East, with these founder signatures appearing in multiple copies across the population. While later admixture introduced additional mtDNA lineages, these absorbed lineages are distinguishable from the original founders. The findings are consistent with genome-wide Identity-by-Descent and Lineage Extinction analyses, reinforcing the Near Eastern origin of the Ashkenazi maternal founders. A study showed that 7% of Ashkenazi Jews have the haplogroup G2c, which is mainly found among Pashtuns and, at lower frequencies, among all major Jewish groups, Palestinians, Syrians, and Lebanese. Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important as the technology develops. They show that Jewish populations have tended to form relatively closely related groups in independent communities, with most in a community sharing significant ancestry in common. For Jewish populations of the diaspora, the genetic composition of Ashkenazi, Sephardic, and Mizrahi Jewish populations shows a predominant amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout the Old World". North African Jews, Italian Jews, and others of Iberian origin show variable frequencies of admixture with non-Jewish historical host populations along the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Jewish admixture is mainly Southern European, while Mizrahi Jews show evidence of admixture with other Middle Eastern populations. Behar et al. have remarked on a close relationship between Ashkenazi Jews and modern Italians. A 2001 study found that Jews were more closely related to groups of the Fertile Crescent (Kurds, Turks, and Armenians) than to their Arab neighbors, whose genetic signature was found in geographic patterns reflective of the Islamic conquests. The studies also show that Sephardic Bnei Anusim (descendants of the "anusim" who were forced to convert to Catholicism), who comprise up to 19.8 percent of the population of today's Iberia (Spain and Portugal) and at least 10 percent of the population of Ibero-America (Hispanic America and Brazil), have Sephardic Jewish ancestry from within the last few centuries. The Bene Israel and Cochin Jews of India, the Beta Israel of Ethiopia, and a portion of the Lemba people of Southern Africa, despite more closely resembling the local populations of their native countries, have also been thought to have some more remote ancient Jewish ancestry. Views on the Lemba have changed, and genetic Y-DNA analyses in the 2000s established a partially Middle Eastern origin for a portion of the male Lemba population, but were unable to narrow this down further. Although Jews have historically been found all over the world, in the decades since World War II and the establishment of Israel, they have increasingly concentrated in a small number of countries. 
In 2021, Israel and the United States together accounted for over 85 percent of the global Jewish population, with approximately 45.3% and 39.6% of the world's Jews, respectively. More than half (51.2%) of world Jewry resides in just ten metropolitan areas. As of 2021, these ten areas were Tel Aviv, New York, Jerusalem, Haifa, Los Angeles, Miami, Philadelphia, Paris, Washington, and Chicago. The Tel Aviv metro area has the highest percentage of Jews in its total population (94.8%), followed by Haifa (73.1%), Jerusalem (72.3%), and Beersheba (60.4%), the balance mostly being Israeli Arabs. Outside Israel, the highest percentage of Jews in a metropolitan area was in New York (10.8%), followed by Miami (8.7%), Philadelphia (6.8%), San Francisco (5.1%), Washington (4.7%), Los Angeles (4.7%), Toronto (4.5%), and Baltimore (4.1%). As of 2010, there were nearly 14 million Jews around the world, roughly 0.2% of the world's population at the time. According to the 2007 estimates of The Jewish People Policy Planning Institute, the world's Jewish population was 13.2 million. This statistic incorporates both practicing Jews affiliated with synagogues and the Jewish community, and approximately 4.5 million unaffiliated and secular Jews. According to Sergio Della Pergola, a demographer of the Jewish population, in 2021 there were about 6.8 million Jews in Israel, 6 million in the United States, and 2.3 million in the rest of the world. Israel, the Jewish nation-state, is the only country in which Jews make up a majority of the citizens. Israel was established as an independent democratic and Jewish state on 14 May 1948. As of 2016, of the 120 members of its parliament, the Knesset, 14 were Arab citizens of Israel (not including the Druze), most representing Arab political parties, and one of Israel's Supreme Court judges was also an Arab citizen of Israel. Between 1948 and 1958, the Jewish population rose from 800,000 to two million. Currently, Jews account for 75.4 percent of the Israeli population, or 6 million people. The early years of the State of Israel were marked by the mass immigration of Holocaust survivors and of Jews fleeing Arab lands. Israel also has a large population of Ethiopian Jews, many of whom were airlifted to Israel in the late 1980s and early 1990s. Between 1974 and 1979, some 227,258 immigrants arrived in Israel, about half of them from the Soviet Union. This period also saw an increase in immigration to Israel from Western Europe, Latin America, and North America. A trickle of immigrants from other communities has also arrived, including Indian Jews and others, as well as some descendants of Ashkenazi Holocaust survivors who had settled in countries such as the United States, Argentina, Australia, Chile, and South Africa. Some Jews have emigrated from Israel elsewhere because of economic problems or disillusionment with political conditions and the continuing Arab–Israeli conflict. Jewish Israeli emigrants are known as yordim. 
The waves of immigration to the United States and elsewhere at the turn of the 19th century, the founding of Zionism and later events, including pogroms in Imperial Russia (mostly within the Pale of Settlement in present-day Ukraine, Moldova, Belarus and eastern Poland), the massacre of European Jewry during the Holocaust, and the founding of the state of Israel, with the subsequent Jewish exodus from Arab lands, all resulted in substantial shifts in the population centers of world Jewry by the end of the 20th century. More than half of the Jews live in the Diaspora (see Population table). Currently, the largest Jewish community outside Israel, and either the largest or second-largest Jewish community in the world, is located in the United States, with 6 million to 7.5 million Jews by various estimates. Elsewhere in the Americas, there are also large Jewish populations in Canada (315,000), Argentina (180,000–300,000), and Brazil (196,000–600,000), and smaller populations in Mexico, Uruguay, Venezuela, Chile, Colombia and several other countries (see History of the Jews in Latin America). According to a 2010 Pew Research Center study, about 470,000 people of Jewish heritage live in Latin America and the Caribbean. Demographers disagree on whether the United States has a larger Jewish population than Israel, with many maintaining that Israel surpassed the United States in Jewish population during the 2000s, while others maintain that the United States still has the largest Jewish population in the world. Currently, a major national Jewish population survey is planned to ascertain whether or not Israel has overtaken the United States in Jewish population. Western Europe's largest Jewish community, and the third-largest Jewish community in the world, can be found in France, home to between 483,000 and 500,000 Jews, the majority of whom are immigrants or refugees from North African countries such as Algeria, Morocco, and Tunisia (or their descendants). The United Kingdom has a Jewish community of 292,000. In Eastern Europe, the exact figures are difficult to establish. The number of Jews in Russia varies widely according to whether a source uses census data (which requires a person to choose a single nationality among choices that include "Russian" and "Jewish") or eligibility for immigration to Israel (which requires that a person have one or more Jewish grandparents). According to the latter criteria, the heads of the Russian Jewish community assert that up to 1.5 million Russians are eligible for aliyah. In Germany, the 102,000 Jews registered with the Jewish community are a slowly declining population, despite the immigration of tens of thousands of Jews from the former Soviet Union since the fall of the Berlin Wall. Thousands of Israelis also live in Germany, either permanently or temporarily, for economic reasons. Prior to 1948, approximately 800,000 Jews were living in lands which now make up the Arab world (excluding Israel). Of these, just under two-thirds lived in the French-controlled Maghreb region, 15 to 20 percent in the Kingdom of Iraq, approximately 10 percent in the Kingdom of Egypt and approximately 7 percent in the Kingdom of Yemen. A further 200,000 lived in Pahlavi Iran and the Republic of Turkey. Today, around 26,000 Jews live in Muslim-majority countries, mainly in Turkey (14,200) and Iran (9,100), while Morocco (2,000), Tunisia (1,000), and the United Arab Emirates (500) host the largest communities in the Arab world. 
A small-scale exodus had begun in many countries in the early decades of the 20th century, although the only substantial aliyah came from Yemen and Syria. The exodus from Arab and Muslim countries took place primarily from 1948. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily in Iraq, Yemen and Libya, with up to 90 percent of these communities leaving within a few years. The peak of the exodus from Egypt occurred in 1956. The exodus in the Maghreb countries peaked in the 1960s. Lebanon was the only Arab country to see a temporary increase in its Jewish population during this period, due to an influx of refugees from other Arab countries, although by the mid-1970s the Jewish community of Lebanon had also dwindled. In the aftermath of the exodus wave from Arab states, an additional migration of Iranian Jews peaked in the 1980s when around 80 percent of Iranian Jews left the country.[citation needed] Outside Europe, the Americas, the Middle East, and the rest of Asia, there are significant Jewish populations in Australia (112,500) and South Africa (70,000). There is also a 6,800-strong community in New Zealand. Since at least the time of the Ancient Greeks, a proportion of Jews have assimilated into the wider non-Jewish society around them, by either choice or force, ceasing to practice Judaism and losing their Jewish identity. Assimilation took place in all areas, and during all time periods, with some Jewish communities, for example the Kaifeng Jews of China, disappearing entirely. The advent of the Jewish Enlightenment of the 18th century (see Haskalah) and the subsequent emancipation of the Jewish populations of Europe and America in the 19th century, accelerated the situation, encouraging Jews to increasingly participate in, and become part of, secular society. The result has been a growing trend of assimilation, as Jews marry non-Jewish spouses and stop participating in the Jewish community. Rates of interreligious marriage vary widely: In the United States, it is just under 50 percent; in the United Kingdom, around 53 percent; in France, around 30 percent; and in Australia and Mexico, as low as 10 percent. In the United States, only about a third of children from intermarriages affiliate with Jewish religious practice. The result is that most countries in the Diaspora have steady or slightly declining religiously Jewish populations as Jews continue to assimilate into the countries in which they live.[citation needed] The Jewish people and Judaism have experienced various persecutions throughout their history. During Late Antiquity and the Early Middle Ages, the Roman Empire (in its later phases known as the Byzantine Empire) repeatedly repressed the Jewish population, first by ejecting them from their homelands during the pagan Roman era and later by officially establishing them as second-class citizens during the Christian Roman era. According to James Carroll, "Jews accounted for 10% of the total population of the Roman Empire. By that ratio, if other factors had not intervened, there would be 200 million Jews in the world today, instead of something like 13 million." Later in medieval Western Europe, further persecutions of Jews by Christians occurred, notably during the Crusades—when Jews all over Germany were massacred—and in a series of expulsions from the Kingdom of England, Germany, and France. 
Then there occurred the largest expulsion of all, when Spain and Portugal, after the Reconquista (the Catholic Reconquest of the Iberian Peninsula), expelled both unbaptized Sephardic Jews and the ruling Muslim Moors. In the Papal States, which existed until 1870, Jews were required to live only in specified neighborhoods called ghettos. Islam and Judaism have a complex relationship. Traditionally Jews and Christians living in Muslim lands, known as dhimmis, were allowed to practice their religions and administer their internal affairs, but they were subject to certain conditions. They had to pay the jizya (a per capita tax imposed on free adult non-Muslim males) to the Islamic state. Dhimmis had an inferior status under Islamic rule. They had several social and legal disabilities such as prohibitions against bearing arms or giving testimony in courts in cases involving Muslims. Many of the disabilities were highly symbolic. The one described by Bernard Lewis as "most degrading" was the requirement of distinctive clothing, not found in the Quran or hadith but invented in early medieval Baghdad; its enforcement was highly erratic. On the other hand, Jews rarely faced martyrdom or exile, or forced compulsion to change their religion, and they were mostly free in their choice of residence and profession. Notable exceptions include the massacre of Jews and forcible conversion of some Jews by the rulers of the Almohad dynasty in Al-Andalus in the 12th century, as well as in Islamic Persia, and the forced confinement of Moroccan Jews to walled quarters known as mellahs beginning from the 15th century and especially in the early 19th century. In modern times, it has become commonplace for standard antisemitic themes to be conflated with anti-Zionist publications and pronouncements of Islamic movements such as Hezbollah and Hamas, in the pronouncements of various agencies of the Islamic Republic of Iran, and even in the newspapers and other publications of Turkish Refah Partisi."[better source needed] Throughout history, many rulers, empires and nations have oppressed their Jewish populations or sought to eliminate them entirely. Methods employed ranged from expulsion to outright genocide; within nations, often the threat of these extreme methods was sufficient to silence dissent. The history of antisemitism includes the First Crusade which resulted in the massacre of Jews; the Spanish Inquisition (led by Tomás de Torquemada) and the Portuguese Inquisition, with their persecution and autos-da-fé against the New Christians and Marrano Jews; the Bohdan Chmielnicki Cossack massacres in Ukraine; the Pogroms backed by the Russian Tsars; as well as expulsions from Spain, Portugal, England, France, Germany, and other countries in which the Jews had settled. According to a 2008 study published in the American Journal of Human Genetics, 19.8 percent of the modern Iberian population has Sephardic Jewish ancestry, indicating that the number of conversos may have been much higher than originally thought. The persecution reached a peak in Nazi Germany's Final Solution, which led to the Holocaust and the slaughter of approximately 6 million Jews. Of the world's 16 million Jews in 1939, almost 40% were murdered in the Holocaust. 
The Holocaust—the state-led systematic persecution and genocide of European Jews (and certain communities of North African Jews in European-controlled North Africa) and other minority groups of Europe during World War II by Germany and its collaborators—remains the most notable modern-day persecution of Jews. The persecution and genocide were accomplished in stages. Legislation to remove the Jews from civil society was enacted years before the outbreak of World War II. Concentration camps were established in which inmates were used as slave labour until they died of exhaustion or disease. Where the Third Reich conquered new territory in Eastern Europe, specialized units called Einsatzgruppen murdered Jews and political opponents in mass shootings. Jews and Roma were crammed into ghettos before being transported hundreds of kilometres by freight train to extermination camps where, if they survived the journey, the majority of them were murdered in gas chambers. Virtually every arm of Germany's bureaucracy was involved in the logistics of the mass murder, turning the country into what one Holocaust scholar has called "a genocidal nation." Throughout Jewish history, Jews have repeatedly been directly or indirectly expelled from both their original homeland, the Land of Israel, and many of the areas in which they have settled. This experience as refugees has shaped Jewish identity and religious practice in many ways, and is thus a major element of Jewish history. In summary, the pogroms in Eastern Europe, the rise of modern antisemitism, the Holocaust, and the rise of Arab nationalism all served to fuel the movements and migrations of huge segments of Jewry from land to land and continent to continent until they arrived back in large numbers at their original historical homeland in Israel. In the Bible, the patriarch Abraham is described as a migrant to the land of Canaan from Ur of the Chaldees. His descendants, the Children of Israel, undertook the Exodus (meaning "departure" or "exit" in Greek) from ancient Egypt, as described in the Book of Exodus. The first movement documented in the historical record occurred with the resettlement policy of the Neo-Assyrian Empire, which mandated the deportation of conquered peoples; it is estimated that some 4,500,000 people among its captive populations suffered this dislocation over three centuries of Assyrian rule. With regard to Israel, Tiglath-Pileser III claims he deported 80% of the population of Lower Galilee, some 13,520 people. Some 27,000 Israelites, 20 to 25% of the population of the Kingdom of Israel, were described as being deported by Sargon II, replaced by other deported populations, and sent into permanent exile by Assyria, initially to the Upper Mesopotamian provinces of the Assyrian Empire. Between 10,000 and 80,000 people from the Kingdom of Judah were similarly exiled by Babylonia, but these people were later returned to Judea by Cyrus the Great of the Persian Achaemenid Empire. Many Jews were exiled again by the Roman Empire. The 2,000-year dispersion of the Jewish diaspora began under the Roman Empire, as Jews were spread throughout the Roman world and, driven from land to land, settled wherever they could live freely enough to practice their religion. Over the course of the diaspora the center of Jewish life moved from Babylonia to the Iberian Peninsula to Poland to the United States and, as a result of Zionism, back to Israel. 
There were also many expulsions of Jews during the Middle Ages and Enlightenment in Europe, including: in 1290, 16,000 Jews were expelled from England (see the Statute of Jewry); in 1396, 100,000 from France; and in 1421, thousands from Austria. Many of these Jews settled in East-Central Europe, especially Poland. Following the Spanish Inquisition, in 1492 the Spanish population of around 200,000 Sephardic Jews was expelled by the Spanish crown and Catholic church, followed by expulsions in 1493 in Sicily (37,000 Jews) and in Portugal in 1496. The expelled Jews fled mainly to the Ottoman Empire, the Netherlands, and North Africa, with others migrating to Southern Europe and the Middle East. During the 19th century, France's policies of equal citizenship regardless of religion led to the immigration of Jews (especially from Eastern and Central Europe). This contributed to the arrival of millions of Jews in the New World. Over two million Eastern European Jews arrived in the United States from 1880 to 1925. In the latest phase of migrations, the Islamic Revolution of Iran caused many Iranian Jews to flee Iran. Most found refuge in the US (particularly Los Angeles, California, and Long Island, New York) and Israel. Smaller communities of Persian Jews exist in Canada and Western Europe. Similarly, when the Soviet Union collapsed, many of the Jews in the affected territory (who had been refuseniks) were suddenly allowed to leave. This produced a wave of migration to Israel in the early 1990s. Israel is the only country with a Jewish population that is consistently growing through natural population growth, although the Jewish populations of other countries, in Europe and North America, have recently increased through immigration. In the Diaspora, in almost every country the Jewish population in general is either declining or steady, but Orthodox and Haredi Jewish communities, whose members often shun birth control for religious reasons, have experienced rapid population growth. Orthodox and Conservative Judaism discourage proselytism to non-Jews, but many Jewish groups have tried to reach out to the assimilated Jewish communities of the Diaspora in order for them to reconnect to their Jewish roots. Additionally, while in principle Reform Judaism favours seeking new members for the faith, this position has not translated into active proselytism, instead taking the form of an effort to reach out to non-Jewish spouses of intermarried couples. There is also a trend of Orthodox movements reaching out to secular Jews in order to give them a stronger Jewish identity so there is less chance of intermarriage. As a result of the efforts by these and other Jewish groups over the past 25 years, there has been a trend (known as the Baal teshuva movement) for secular Jews to become more religiously observant, though the demographic implications of the trend are unknown. There is also a growing rate of conversion by "Jews by choice", gentiles who decide to convert to Judaism. Contributions Jewish individuals have played a significant role in the development and growth of Western culture, advancing many fields of thought, science and technology, both historically and in modern times, including through discrete trends in Jewish philosophy, Jewish ethics and Jewish literature, as well as specific trends in Jewish culture, including in Jewish art, Jewish music, Jewish humor, Jewish theatre, Jewish cuisine and Jewish medicine. 
Jews have established various Jewish political movements, religious movements, and, through the authorship of the Hebrew Bible and parts of the New Testament, provided the foundation for Christianity and Islam. More than 20 percent of Nobel Prizes awarded have gone to individuals of Jewish descent. Philanthropic giving is a widespread core function of Jewish organizations. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Harvey_Sacks] | [TOKENS: 700] |
Harvey Sacks Harvey Sacks (July 19, 1935 – November 14, 1975) was an American sociologist influenced by the ethnomethodology tradition. He pioneered extremely detailed studies of the way people use language in everyday life. Despite his early death in a car crash and the fact that he did not publish widely, he is recognized as the founder of conversation analysis. His work has had significant influence on fields such as linguistics, discourse analysis, and discursive psychology. Life and academic career Sacks received his doctoral degree in sociology at the University of California, Berkeley (1966), an LL.B. at Yale Law School (1959), and a B.A. at Columbia College (1955). He lectured at the University of California, Los Angeles, and the University of California, Irvine, from 1964 to 1975. In 1975 Sacks died in a car accident. He was survived by his wife, two siblings, and his parents. Work Sacks became interested in the structure of conversation while working at a suicide counseling hotline in Los Angeles in the 1960s. The calls to the hotline were recorded, and Sacks was able to gain access to the tapes and study them. In the 1960s, prominent linguists like Noam Chomsky believed that conversation was too disorganized to be worthy of any kind of in-depth structural analysis.[citation needed] Sacks strongly disagreed, since he saw structure in every conversation, and developed conversation analysis as a result. Sacks's recorded lectures were transcribed (by Gail Jefferson, who also edited them posthumously) but the tapes were not saved. The duplicated copies of the transcribed lectures were made freely available by Sacks and achieved international circulation and recognition during his lifetime and subsequently.[citation needed] He treated such topics as: the organization of person-reference; topic organization and stories in conversation; speaker selection preferences; pre-sequences; the organization of turn-taking; conversational openings and closings; and puns, jokes, stories and repairs in conversation, among many others. Legacy Emanuel Schegloff, one of Sacks's close collaborators, colleagues and co-authors, became his literary executor. The subsequent handling of the literary estate (nachlass, to use the academic term) has attracted some controversy.[citation needed] Sacks's major work, Lectures on Conversation, is composed of edited revisions of transcribed lectures held from Spring 1964 through to 1972, and comprises about 1200 pages in a two-volume work published by Basil Blackwell in 1992. This publication project was instigated largely by David Sudnow and Gail Jefferson, colleagues and students of Sacks at Berkeley, UCLA and Irvine, and includes an introduction by Emanuel Schegloff. In her acknowledgements in these volumes, Jefferson mentioned the help of Sudnow in dealing with Sacks's literary estate. The Harvey Sacks Memorial Association, registered as a not-for-profit association, was formed by Sudnow.[citation needed] These Lectures have been important for Sacks's later influence and for the field of Conversation Analysis. Sudnow was a follower of Alfred Schutz in phenomenology, and of Harold Garfinkel in ethnomethodology. Sudnow regards the work of Sacks as outside the ethnomethodological mainstream.[citation needed] By contrast, Garfinkel lists Sacks as one of 'Ethnomethodology's Authors'. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/BETA_(programming_language)] | [TOKENS: 591] |
BETA (programming language) BETA is a pure object-oriented language originating within the "Scandinavian School" of object-orientation, where the first object-oriented language, Simula, was developed. Among its notable features, it introduced nested classes and unified classes with procedures into so-called patterns. It has been in development since 1976, with implementations known since 1986, by Kristen Nygaard together with Bent Bruun Kristensen, Ole Lehrmann Madsen, and Birger Møller-Pedersen, at the University of Oslo. The project is inactive as of October 2020. Features From a technical perspective, BETA provides several unique features. Classes and procedures are unified into one concept, a pattern. Also, classes are defined as properties/attributes of objects. This means that a class cannot be instantiated without an explicit object context. A consequence of this is that BETA supports nested classes. Classes can also be defined virtually, much like virtual methods in most object-oriented programming languages. Virtual entities (such as methods and classes) are never overwritten; instead they are redefined or specialized. BETA supports the object-oriented perspective on programming and has comprehensive facilities for procedural and functional programming. It has powerful abstraction mechanisms to support identification of objects, classification and composition. BETA is a statically typed language like Simula, Eiffel and C++, with most type checking done at compile time; it aims to achieve an optimal balance between compile-time and run-time type checking. A major and peculiar feature of the language is the concept of patterns. Where another programming language, such as C++, would have separate classes and procedures, BETA expresses both of these concepts using patterns. For example, a simple class that in C++ would be declared with two integer members is represented in BETA by a single pattern: a pattern called point with two fields, x and y, of type integer. The symbols (# and #) introduce patterns. The colon is used to declare patterns and variables. The @ sign before the integer type in a field definition specifies that the field is a static integer item and not, by contrast, a reference, array or other pattern. As another comparison, a function that in C++ would be written as a procedure is expressed in BETA by a pattern whose local variables are x, y and z. The enter keyword specifies the input parameters of the pattern, while the exit keyword specifies the result of the function. Between the two, the do keyword prefixes the sequence of operations to be carried out. A conditional block is delimited by (if and if); that is, the if keyword becomes part of the opening and closing parentheses. Truth is checked through // True within an if block. Finally, the assignment operator -> assigns the value on its left-hand side to the variable on its right-hand side. A short snippet in the same notation prints the standard line "Hello world!". Reconstructed sketches of all three examples follow below. |
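The code listings this section refers to did not survive extraction, so what follows is a minimal reconstructed sketch in BETA, based only on the syntax described above (the (# ... #) pattern delimiters, the colon for declarations, the @ marker for static integer items, the enter/do/exit sections, the (if ... if) conditional with // True, and the -> assignment operator). Details such as the exact branch syntax and the name of the output operation (putline, assumed here to come from the basic BETA environment) may differ from the original article's listings.

(* A class-like pattern: point has two static integer fields, x and y. *)
point: (# x, y: @integer #);

(* A procedure-like pattern: max receives x and y through enter,
   computes the larger value into z in the do part, and delivers z through exit. *)
max:
  (# x, y, z: @integer
  enter (x, y)
  do (if x >= y
      // True then x -> z
      else y -> z
     if)
  exit z
  #);

(* A snippet whose do part prints "Hello world!"; putline is an assumed library operation. *)
(# do 'Hello world!' -> putline #)

In the C++ comparison the article describes, the point pattern corresponds to a class with two int members and the max pattern to a free-standing function returning the larger of two integers; in BETA both are instances of the single pattern construct.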
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/List_of_Internet_pioneers] | [TOKENS: 12605] |
List of Internet pioneers Instead of having a single inventor, the Internet was developed by many people over many years. The following people are Internet pioneers who have been recognized for their contribution to its early and ongoing development. These contributions include theoretical foundations, building early networks, specifying protocols, and expansion beyond a research tool to wide deployment; the list includes people recognized under one or more of these four criteria. Among the pioneers, along with Cerf and Kahn, Bob Metcalfe, Donald Davies, Louis Pouzin, Steve Crocker and Ray Tomlinson meet three out of the four criteria above; as well as Jon Postel, considering the 2003 IEEE Internet award on which he is posthumously cited. Davies and Kahn are featured in the 1972 documentary film Computer Networks: The Heralds of Resource Sharing along with several early pioneers. Other Internet pioneers, who made notable contributions to the development of the Internet but do not meet any of the four criteria above, are listed in the final section of the article. The pioneers are listed in rough chronological order, reflecting the process through which the Internet developed. Birth of the Internet plaque A plaque commemorating the "Birth of the Internet" was dedicated at a conference on the history and future of the Internet on 28 July 2005 and is displayed at Stanford University. The seminal paper on internetworking, "A Protocol for Packet Network Intercommunication", published in 1974 by Vint Cerf, at Stanford University, and Bob Kahn, at ARPA, acknowledges a number of early members of the International Network Working Group (INWG): "The authors wish to thank a number of colleagues for helpful comments during early discussions of international network protocols, especially R. Metcalfe, R. Scantlebury, D. Walden, and H. Zimmerman; D. Davies and L. Pouzin who constructively commented on the fragmentation and accounting issues; and S. Crocker who commented on the creation and destruction of associations". The first version of TCP, RFC 675, was written later that year by Cerf with Yogen Dalal and Carl Sunshine. The introduction states "The authors would like to acknowledge the contributions of R. Tomlinson ..., D. Belsnes, J. Burchfiel, M. Galland, R. Kahn, D. Lloyd, W. Plummer, and J. Postel all of whose good ideas and counsel have had a beneficial effect (we hope) on this protocol design. In the early phases of the design work, R. Metcalfe, A. McKenzie, H. Zimmerman, G. LeLann, and M. Elie were most helpful in explicating the various issues to be resolved." Subsequently, ARPA funded another working group to develop TCP for use in internetworking. Over two hundred Internet Experiment Notes (IEN) were produced, documenting the group's work. Only a few of the people who authored notes, participated in the work, or whose work was referenced in the notes are named on the "Birth of the Internet" plaque. Robert Metcalfe, Yogen Dalal and John Shoch contributed to discussions leading up to the splitting of TCP, which influenced the work of Jon Postel at the Information Sciences Institute at the University of Southern California (USC-ISI) published as the second Internet Experiment Note (IEN2). TCP version 2, published in 1977 (IEN5) and authored by Cerf, states that "Although the list of participants in the TCP work is very long (see ... the final TCP project report), special acknowledgements are due to R. Kahn, R. Tomlinson, T. Dalal, R. Karp and C. Sunshine for their active participation In the design of TCP." 
At that time, the "Final Report of the Internetwork TCP Project" was to be written by: Cerf, who led the work at Stanford University and had moved to ARPA to manage the program with Kahn; Peter T. Kirstein, who led the work at University College London (UCL); and Paal Spilling, who led the work at the Norwegian Defence Research Establishment (NDRE); and, in addition, three of their team members who were Stephen Edge and Andrew Hinchley at UCL, who authored the first IEN (along with their colleague Chris Bennett who authored several other IENs), and Richard Karp at Stanford. The preface of version 3, published in 1978 by Cerf and Postel, states that "The evolution from TCP version 2 to version 3 was influenced by many people, but special mention should be made of the work at MIT's Laboratory for Computer Science on the Data Stream Protocol (DSP) by Dave Clark and Dave Reed. Many of the specific changes introduced in version 3 were first described by Ray Tomlinson of BBN." It goes on to add that "This edition of the specification benefited from the comments of the following reviewers: Michael Padlipsky, Carl Sunshine, John Day, Gary Grossman, and Ray Tomlinson". Postel published version 4 later that year, in which TCP and IP were split into separate protocols, the preface of which notes "This revised edition of the version 4 specification was influenced by the comments of the following: Vint Cerf, Dick Watson, Carl Sunshine, Danny Cohen, Dave Clark, John Day, Gary Grossman, Jim Mathis, Bill Plummer, Jack Haverty, and the whole TCP Working Group." The bibliography of each of the TCP versions references papers published by many more researchers active in the field at the time. The final report of the project, which was orchestrated and funded by ARPA, was eventually published by Cerf alone in 1980 under the title "Final Report of the Stanford University TCP Project" (IEN151); several of the people mentioned in the report are named on the plaque. Cerf has discussed the role of some of the participants in his oral history, including Roger Scantlebury and Donald Davies; his graduate students at Stanford, Judy Estrin, Richard Karp, Yogan Dalal, and Carl Sunshine; and visiting researchers Gérard Le Lann, Dag Belsnes, James Mathis, Darryl Rubin, and Ronald Crane. The text printed and embossed in black into the brushed bronze surface of the Stanford plaque reads: BIRTH OF THE INTERNET THE ARCHITECTURE OF THE INTERNET AND THE DESIGN OF THE CORE NETWORKING PROTOCOL TCP (WHICH LATER BECAME TCP/IP) WERE CONCEIVED BY VINTON G. CERF AND ROBERT E. KAHN DURING 1973 WHILE CERF WAS AT STANFORD'S DIGITAL SYSTEMS LABORATORY AND KAHN WAS AT ARPA (LATER DARPA). IN THE SUMMER OF 1976, CERF LEFT STANFORD TO MANAGE THE PROGRAM WITH KAHN AT ARPA. THEIR WORK BECAME KNOWN IN SEPTEMBER 1973 AT A NETWORKING CONFERENCE IN ENGLAND. CERF AND KAHN'S SEMINAL PAPER WAS PUBLISHED IN MAY 1974. CERF, YOGEN K. DALAL, AND CARL SUNSHINE WROTE THE FIRST FULL TCP SPECIFICATION IN DECEMBER 1974. WITH THE SUPPORT OF DARPA, EARLY IMPLEMENTATIONS OF TCP (AND IP LATER) WERE TESTED BY BOLT BERANEK AND NEWMAN (BBN), STANFORD, AND UNIVERSITY COLLEGE LONDON DURING 1975. BBN BUILT THE FIRST INTERNET GATEWAY, NOW KNOWN AS A ROUTER, TO LINK NETWORKS TOGETHER. IN SUBSEQUENT YEARS, RESEARCHERS AT MIT AND USC-ISI, AMONG MANY OTHERS, PLAYED KEY ROLES IN THE DEVELOPMENT OF THE SET OF INTERNET PROTOCOLS. 
KEY STANFORD RESEARCH ASSOCIATES AND FOREIGN VISITORS VINTON CERF DAG BELSNES JAMES MATHIS RONALD CRANE JUNIOR BOB METCALFE YOGEN DALAL DARRYL RUBIN JUDITH ESTRIN JOHN SHOCH RICHARD KARP CARL SUNSHINE GERARD LE LANN KUNINOBU TANNO DARPA ROBERT KAHN COLLABORATING GROUPS BOLT BERANEK AND NEWMAN WILLIAM PLUMMER • GINNY STRAZISAR • RAY TOMLINSON MIT NOEL CHIAPPA • DAVID CLARK • STEPHEN KENT • DAVID P. REED NDRE YNGVAR LUNDH • PAAL SPILLING UNIVERSITY COLLEGE LONDON FRANK DEIGNAN • MARTINE GALLAND • PETER HIGGINSON ANDREW HINCHLEY • PETER KIRSTEIN • ADRIAN STOKES USC-ISI ROBERT BRADEN • DANNY COHEN • DANIEL LYNCH • JON POSTEL ULTIMATELY, THOUSANDS IF NOT TENS TO HUNDREDS OF THOUSANDS HAVE CONTRIBUTED THEIR EXPERTISE TO THE EVOLUTION OF THE INTERNET. DEDICATED 28 July 2005 J. C. R. Licklider Joseph Carl Robnett Licklider (1915–1990) was a faculty member of Massachusetts Institute of Technology (MIT), and researcher at Bolt, Beranek and Newman. He developed the idea of a universal computer network at the Information Processing Techniques Office (IPTO) of the United States Department of Defense Advanced Research Projects Agency (ARPA). He headed the IPTO from 1962 to 1963, and again from 1974 to 1975. His 1960 paper "Man-Computer Symbiosis" envisions that mutually-interdependent, "living together", tightly coupled human brains and computing machines would prove to complement each other's strengths. In 2013, Licklider was inducted into the Internet Hall of Fame "pioneers" award by the Internet Society. Paul Baran Paul Baran (1926–2011) developed the field of redundant distributed networks while conducting research at RAND Corporation starting in 1960 when Baran began investigating the development of large-scale survivable communication networks. This led to a series of papers titled "On Distributed Communications" that in 1964 described a detailed architecture for distributed adaptive message block switching. The proposal was composed of three key ideas: use of a decentralized network with multiple paths between any two points; dividing user messages into message blocks; and delivery of these messages by store and forward switching. Baran's network design was never built; it was intended for voice communication using low-cost electronics and did not feature software switches. Baran provided input to the ARPANET project on distributed communications and dynamic routing. Baran received the inaugural SIGCOMM Award in 1989, the inaugural IEEE Internet Award in 2000 and the inaugural Internet Hall of Fame "pioneers" award from the Internet Society in 2012. Donald Davies Donald Davies (1924–2000) independently invented and named the concept of packet switching for data communications in 1965 at the United Kingdom's National Physical Laboratory (NPL). In the same year, he proposed a national commercial data network in the UK employing high-speed switching nodes. He refined his ideas in a paper written in 1966, which included the first description of an interface computer to act as a router. Later in 1966, he established a team which produced a design for a local-area network to serve the needs of NPL and prove the feasibility of packet switching while developing a more formal design proposal for a national network based on a high-level network connected to local networks. Davies built the local-area NPL network, the first implementation of packet switching in early 1969 and the first to use high-speed links. His work influenced the ARPANET and research in Europe and Japan. 
He carried out simulation work on datagram networks on a scale to provide data communication to much of the United Kingdom and designed an adaptive method of congestion control, which he called isarithmic. In the 1970s, Davies worked on internetworking and secure communication. He was acknowledged by Vint Cerf and Bob Kahn in their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. Davies received the inaugural IEEE Internet Award in 2000 and the inaugural Internet Hall of Fame "pioneers" award from the Internet Society in 2012. Roger Scantlebury Roger Scantlebury (born 1936) led the pioneering work to implement packet switching and associated communication protocols at the NPL in the late 1960s. Scantlebury and his colleague Keith Bartlett were the first to describe the term protocol in a modern data-communications context in an April 1967 memorandum entitled A Protocol for Use in the NPL Data Communications Network. He proposed the use of packet switching in the ARPANET at the inaugural Symposium on Operating Systems Principles in October 1967 and convinced Larry Roberts the economics were favorable to message switching. During the 1970s, he was a major figure in the International Network Working Group (INWG) through which he was an early contributor to concepts used in the Transmission Control Program, which became part of the Internet protocol suite. He was acknowledged by Cerf and Kahn in their seminal 1974 paper on internetworking. Bob Taylor Robert W. Taylor (1932–2017) was director of ARPA's Information Processing Techniques Office (IPTO) from 1966 through 1969, where he convinced ARPA to fund a computer network. The 1968 paper, "The Computer as a Communication Device", that he wrote together with J.C.R. Licklider starts out: "In a few years, men will be able to communicate more effectively through a machine than face to face." And while their vision would take more than "a few years", the paper lays out the future of what the Internet would eventually become. From 1970 to 1983, he managed the Computer Science Laboratory of the Xerox Palo Alto Research Center (PARC), where technologies such as Ethernet and the Xerox Alto were developed. He was the founder and manager of Digital Equipment Corporation's Systems Research Center until 1996. Larry Roberts Lawrence G. "Larry" Roberts (1937–2018) was an American computer scientist. After earning his PhD in electrical engineering from MIT in 1963, Roberts continued to work at MIT's Lincoln Laboratory where in 1965 he connected Lincoln Lab's TX-2 computer to the SDC Q-32 computer in Santa Monica. In 1967, he became a program manager in the ARPA Information Processing Techniques Office (IPTO), where he managed the development of the ARPANET, the first wide area packet switching network. Roberts applied Donald Davies' concepts of packet switching in the ARPANET, and sought input from Paul Baran and other researchers on network design. After Robert Taylor left ARPA in 1969, Roberts became director of the IPTO. In 1973, he left ARPA to commercialize the nascent technology in the form of Telenet, which became one of the first public data networks in the world, and served as its CEO from 1973 to 1980. In 2012, Roberts was inducted into the Internet Hall of Fame by the Internet Society. Leonard Kleinrock Leonard Kleinrock (born 1934) studied the optimization of message delays in communication networks using queueing theory in his Ph.D. thesis, Message Delay in Communication Nets with Storage, at MIT in 1962. 
After this, he moved to UCLA. Kleinrock became involved in the ARPANET project in early 1967. In 1969, under his contract with ARPA to run the Network Measurement Center, a team at UCLA connected a computer to an Interface Message Processor (IMP), becoming the first node on the network. Building on his earlier work on queueing theory, during the 1970s, Kleinrock carried out theoretical work to measure, simulate and mathematically model the performance of the ARPANET. Kleinrock published hundreds of research papers, which ultimately launched a new field of research on the theory and application of queuing theory to computer networks. His work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 2012, Kleinrock was inducted into the Internet Hall of Fame by the Internet Society. Frank Heart Frank Heart (1929–2018) worked for Bolt, Beranek and Newman (BBN) from 1966 to 1994, during which time he managed the team that designed and implemented the Interface Message Processors (IMPs), the routing computers for the ARPANET. Bob Kahn Robert E. "Bob" Kahn (born 1938) is an American engineer and computer scientist. After earning a Ph.D. degree from Princeton University in 1964, he worked for AT&T Bell Laboratories, as an assistant professor at MIT. He moved to Bolt Beranek & Newman (BBN) where he was the principal designer of the IMP subnetwork and the IMP-Host protocol for the ARPANET. In 1972, he joined the IPTO within ARPA, where he worked on both satellite packet networks (which led to SATNET) and ground-based radio packet networks (which led to PRNET), and recognized the value of being able to communicate across heterogenous networks. Along with Vint Cerf, he authored the seminal paper on internetworking, A Protocol for Packet Network Intercommunication, in 1974. Kahn left ARPA in 1986 to found the Corporation for National Research Initiatives (CNRI), a nonprofit organization providing leadership and funding for research and development of the National Information Infrastructure. David Walden David Walden (1942–2022) worked for BBN where he implemented the packet switching and routing software for the ARPANET IMP. He proposed what became known as the Walden message switching protocol, and was acknowledged by Cerf and Kahn in their seminal 1974 paper on internetworking. He wrote INWG Note 10 with Will Crowther, and co-authored the ARPANET Completion Report with Frank Heart, Alex Mckenzie, and J. McQuillian. Ray Tomlinson Ray Tomlinson (1941–2016) worked for BBN. He carried out the first experimental message transfer between separate computer systems on the ARPANET in 1971. His message was sent from one DEC PDP-10 computer to another PDP-10, placed next to each other. Tomlinson initiated the use of the @ sign to separate the names of the user and the user's machine. Tomlinson's idea for network mail was adopted on the ARPANET, which significantly increased network traffic. As a result, he has been called "the inventor of modern email". The use of the File Transfer Protocol (FTP) for network mail on the ARPANET was proposed in RFC 469 in March 1973. Through RFC 561, 680, 724, and finally 733 in November 1977, a standardized framework was developed for "electronic mail" using FTP mail servers on the ARPANET. Tomlinson discussed a network mail protocol among the International Network Working Group in INWG Protocol note 2, in September 1974, although it was never adopted. 
Furthermore, he participated in the initial design of TCP during 1973–74, was acknowledged in the specification of TCP version 2 in March 1977, and version 3 in January 1978, which says that many of the changes introduced in that version were first described by Tomlinson the previous year when he put forward a "Proposal for TCP 3". Tomlinson received the IEEE Internet Award in 2004, with David H. Crocker, for networked email. Steve Crocker Steve Crocker (born 1944) has worked in the ARPANET and Internet communities since their inception. As a UCLA graduate student in the 1960s, he led the creation of the ARPANET Network Control Protocol. He also created the Request for Comments (RFC) series, authoring the very first RFC and many more. He was instrumental in creating the ARPA Network Working Group, the forerunner of the modern Internet Engineering Task Force. In 1972, Crocker moved to the Advanced Research Projects Agency (ARPA) to become a program manager. He formed the International Network Working Group (INWG), then his research interests shifted to artificial intelligence. He was acknowledged by Cerf and Kahn in their seminal 1974 paper on internetworking. He was a senior researcher at USC's Information Sciences Institute (ISI) where he contributed to discussions on the Transmission Control Program in August 1977. He was a founder and director of the Computer Science Laboratory at The Aerospace Corporation and a vice president at Trusted Information Systems. In 1994, Crocker was one of the founders and chief technology officer of CyberCash, Inc. He has also been an IETF security area director, a member of the Internet Architecture Board, chair of the Internet Corporation for Assigned Names and Numbers (ICANN) Security and Stability Advisory Committee, a board member of the Internet Society and numerous other Internet-related volunteer positions. Crocker is chair of the board of ICANN. For this work, Crocker was awarded the 2002 IEEE Internet Award "for leadership in creation of key elements in open evolution of Internet protocols". In 2012, Crocker was inducted into the Internet Hall of Fame by the Internet Society. Jon Postel Jon Postel (1943–1998) was a researcher at the Information Sciences Institute (ISI) at the University of Southern California (USC) . He was editor of much of the early the RFC series as well as versions 3 and 4 of TCP/IP in January 1978 and February 1979, and the final version of TCP and Internet Protocol, which were published in January 1980 by DARPA on behalf of the Defense Communications Agency. He was the creator of the Simple Mail Transfer Protocol (SMTP) and the co-creator and longtime administrator of the Internet Assigned Numbers Authority (IANA). His beard and sandals made him "the most recognizable archetype of an Internet pioneer". The International Network Working Group (INWG) discussed protocols for electronic mail in 1979, which was referenced by Postel in his early work on Internet email. Postel first proposed an Internet Message Protocol in 1979 as part of the Internet Experiment Note (IEN) series. In September 1980, Postel and Suzanne Sluizer published RFC 772 which proposed the Mail Transfer Protocol to enable servers to transmit computer mail on the ARPANET as a replacement for FTP. RFC 780 published in May 1981 removed all references to FTP. In November 1981, Postel published RFC 788 describing the Simple Mail Transfer Protocol (SMTP) protocol, which was updated by RFC 821 in August 1982. 
Addresses were extended to username@host.domain by RFC 805 in February 1982. RFC 822, written by David H. Crocker, defined the format for messages. The Internet Society's Postel Award is named in his honor, as is the Postel Center at the Information Sciences Institute. His obituary was written by Vint Cerf and published as RFC 2468 in remembrance of Postel and his work. In 2012, Postel was inducted into the Internet Hall of Fame by the Internet Society. Vint Cerf Vinton G. "Vint" Cerf (born 1943) is an American computer scientist. He is recognized as one of "the fathers of the Internet", sharing this title with Bob Kahn. He earned his Ph.D. from UCLA in 1972. At UCLA he worked in Professor Leonard Kleinrock's networking group that connected the first two nodes of the ARPANET and contributed to the ARPANET host-to-host protocol, the Network Control Program. Cerf was an assistant professor at Stanford University from 1972 to 1976, where he conducted research on packet network interconnection protocols and co-designed the DoD TCP/IP protocol suite. He authored the seminal paper on internetworking, A Protocol for Packet Network Intercommunication, in May 1974 with Bob Kahn; the first specification of TCP with Yogen Dalal and Carl Sunshine in December that year; and edited the second version of TCP in March 1977. He was a program manager for the Advanced Research Projects Agency (ARPA) from 1976 to 1982, overseeing the first internetworking experiments with SATNET and PRNET. Cerf was instrumental in the formation of both the Internet Society and Internet Corporation for Assigned Names and Numbers (ICANN), serving as founding president of the Internet Society from 1992 to 1995 and in 1999 as chairman of the board and as ICANN Chairman from 2000 to 2007. His many awards include the National Medal of Technology, the Turing Award, the Presidential Medal of Freedom, and membership in the National Academy of Engineering and the Internet Society's Internet Hall of Fame. Douglas Engelbart Douglas Engelbart (1925–2013) was an early researcher at the Stanford Research Institute. His Augmentation Research Center laboratory became the second node on the ARPANET in October 1969, and SRI became the early Network Information Center, which evolved into the domain name registry. Engelbart was a committed, vocal proponent of the development and use of computers and computer networks to help cope with the world's increasingly urgent and complex problems. He is best known for his work on the challenges of human–computer interaction, resulting in the invention of the computer mouse, and the development of hypertext, networked computers, and precursors to graphical user interfaces. John Klensin John Klensin's involvement with Internet began in 1969, when he worked on the File Transfer Protocol. Klensin was involved in the early procedural and definitional work for DNS administration and top-level domain definitions and was part of the committee that worked out the transition of DNS-related responsibilities between USC-ISI and what became ICANN. His career includes 30 years as a principal research scientist at MIT, a stint as INFOODS Project Coordinator for the United Nations University, Distinguished Engineering Fellow at MCI WorldCom, and Internet Architecture Vice President at AT&T; he is now an independent consultant. In 1992 Randy Bush and John Klensin created the Network Startup Resource Center, helping dozens of countries to establish connections with FidoNet, UseNet, and when possible the Internet. 
In 2003, he received an International Committee for Information Technology Standards Merit Award. In 2007, he was inducted as a Fellow of the Association for Computing Machinery for contributions to networking standards and Internet applications. In 2012, Klensin was inducted into the Internet Hall of Fame by the Internet Society. Elizabeth Feinler Elizabeth J. "Jake" Feinler (born 1931) was a staff member of Doug Engelbart's Augmentation Research Center (ARC) at SRI and PI for the Network Information Center (NIC) for the ARPANET and the Defense Data Network (DDN) from 1972 until 1989. In 2012, Feinler was inducted into the Internet Hall of Fame by the Internet Society. Louis Pouzin Louis Pouzin (born 1931) is a French computer scientist. He built the first implementation of a wide-area datagram packet-communications network, CYCLADES, that demonstrated the feasibility of internetworking, which he called a "catenet". He was acknowledged by Vint Cerf and Bob Kahn in their seminal 1974 paper on internetworking. Further concepts from his work were reflected in the development of TCP/IP. In 1997, Pouzin received the ACM SIGCOMM Award for "pioneering work on connectionless packet communication". He was named a Chevalier of the Legion of Honor by the French government on 19 March 2003. In 2012, Pouzin was inducted into the Internet Hall of Fame by the Internet Society. Hubert Zimmermann Hubert Zimmerman (1941–2012) was a French software engineer who pioneered internetworking with Louis Pouzin. He contributed to early discussions on the Transmission Control Program, and was acknowledged by Cerf and Kahn in their seminal 1974 paper on internetworking. Gérard Le Lann Gérard Le Lann proposed the sliding window scheme for achieving reliable error and flow control on end-to-end connections. He joined Vint Cerf's research team at Stanford University during 1973-4 and Cerf incorporated his sliding window scheme into the research work for the Transmission Control Program (TCP). Le Lann is included on the Stanford University "Birth of the Internet" plaque and mentioned in the Stanford TCP project completion report. Bob Metcalfe Bob Metcalfe (born 1946) designed and began implementing Ethernet and the PARC Universal Packet for internetworking while studying for his PhD at Harvard University and working at Xerox Parc. He contributed to early discussions on the Transmission Control Program at the International Network Working Group (INWG) meeting in June 1973, and participated in the initial design of TCP, worked out at Stanford during 1973–74. He was acknowledged by Cerf and Kahn in their seminal 1974 paper on internetworking. In addition, along with Yogen Dalal, he contributed to discussions leading up to the splitting of TCP, which influenced the work of Jon Postel, published in the Internet Experiment Note series. John Shoch John Shoch worked on internetworking at Xerox Parc. He contributed to early discussions on the Transmission Control Program at the June 1973 INWG meeting, as well as discussions in August 1977 about splitting TCP from IP, and was acknowledged in an early version of TCP version v4 in September 1978 in which the split was implemented. He published several Internet Experiment Notes in the late 1970s and 1980, and his work was referenced in the final IP version 4 that would be standardized in RFC 760 (1980) and RFC 791 (1981). Yogen Dalal Yogen K. Dalal, is an Indian-born electrical engineer and computer scientist. 
He was an ARPANET pioneer and a key contributor to the development of internetworking protocols. Dalal co-authored the first Transmission Control Program specification with Vint Cerf and Carl Sunshine between 1973 and 1974. It was published as RFC 675 (Specification of Internet Transmission Control Program) in December 1974. It first used the term internet as a shorthand for internetworking, and later RFCs repeated this use. Between 1976 and 1977, Dalal, Bob Metcalfe and others proposed splitting Transmission Control Program into Transmission Control Protocol and Internet Protocol, leading to the development of TCP/IP. After receiving a B.Tech in Electrical Engineering at the Indian Institute of Technology Bombay, he went to the United States to study for a master's degree at Stanford University in 1972 and then a PhD in 1973. His interest in data communication as a graduate student led him to work with Vint Cerf, then a new professor, as a teaching assistant in 1973, and then as a research assistant while studying for his PhD. In Summer 1973, while Cerf and Bob Kahn were attempting to formulate an internetworking protocol, Dalal joined their research team to assist them in developing what eventually became Transmission Control Program. After co-authoring the first internet protocol with Cerf and Sunshine in 1974, Dalal received his PhD in Electrical Engineering and Computer Science in 1977, while remaining active in the development of TCP/IP at Stanford. Due to his experience in communication protocols, several key research centers were greatly interested in recruiting him, but in early 1977 Dalal joined Robert Metcalfe's team at Xerox PARC, where he contributed to the development of the Xerox Network Systems (XNS) and the Xerox Star. He also worked on the 10 Mbps Ethernet Specification along with DEC and Intel, leading to the IEEE 802.3 LAN standard. He later left Xerox and became a founding member of the startup tech companies Metaphor Computer Systems and Claris in the 1980s. He later became a managing partner of Mayfield, and joined the Board of Directors at several tech companies including Tibco and Nuance. In 2005, he was recognized by Stanford as one of the pioneers of the Internet. Carl Sunshine Carl Sunshine completed his PhD under Vint Cerf at the Digital Systems Laboratory, Stanford University. He worked on the first full TCP specification in December 1974 with Cerf and Yogen Dalal. He later worked for RAND and The Aerospace Corporation. Sunshine published a notable paper on internetworking in 1977, among many papers on networking. During the 1980s, he chaired the International Network Working Group, and edited two books on communication protocols. Peter Kirstein Peter T. Kirstein (1933–2020) was a British computer scientist and a leader in the international development of the Internet. In 1973, he established one of the first two international nodes of the ARPANET. In 1978 he co-authored "Issues in packet-network interconnection" with Vint Cerf, one of the early technical papers on the internet concept. His research group at University College London adopted TCP/IP in 1982, ahead of ARPANET, and played a significant role in the very earliest experimental Internet work. 
Starting in 1983 he chaired the International Collaboration Board, which involved six NATO countries, served on the Networking Panel of the NATO Science Committee (serving as chair in 2001), and on Advisory Committees for the Australian Research Council, the Canadian Department of Communications, the German GMD, and the Indian Education and Research Network (ERNET) Project. He led the Silk Project, which provides satellite-based Internet access to the Newly Independent States in the Southern Caucasus and Central Asia. In 2012, Kirstein was inducted into the Internet Hall of Fame by the Internet Society. Adrian Stokes Adrian Stokes (1945–2020) was a researcher at UCL's Institute of Computer Science working for Peter Kirstein in 1973. He worked on the first implementation of email in the United Kingdom in 1974 as well as the early monitoring software for the interconnection of the ARPANET with British academic networks, the first international heterogenous computer network. He contributed to a number of books on communication protocols and computer networking from the late 1970s to the early 1990s. Danny Cohen Danny Cohen (1937–2019) led several projects on real-time interactive applications over the ARPANet and the Internet starting in 1973. After serving on the computer science faculty at Harvard University (1969–1973) and Caltech (1976), he joined the Information Sciences Institute (ISI) at University of Southern California (USC). At ISI (1973–1993) he started many network related projects including, one to allow interactive, real-time speech over the ARPANet, packet-voice, packet-video, and Internet Concepts. He was acknowledged in the specification of TCP version 3 in January 1978. In 1981 he adapted his visual flight simulator to run over the ARPANet, the first application of packet switching networks to real-time applications. In 1993, he worked on Distributed Interactive Simulation through several projects funded by United States Department of Defense. He is probably best known for his 1980 paper "On Holy Wars and a Plea for Peace" which adopted the terminology of endianness for computing. Cohen was elected to the National Academy of Engineering in 2006 for contributions to the advanced design, graphics, and real-time network protocols of computer systems and as an IEEE Fellow in 2010 for contributions to protocols for packet switching in real-time applications. In 1993 he received a United States Air Force Meritorious Civilian Service Award. And in 2012, Cohen was inducted into the Internet Hall of Fame by the Internet Society. Judith Estrin Judith Estrin worked with Vinton Cerf on the Transmission Control Protocol project at Stanford University in the 1970s. Her role within the research team was to help with the initial implementation tests of TCP with University College London. David Clark David D. Clark (born 1944) is an American computer scientist. He was acknowledged in the specification of TCP version 4 in September 1978. During the period of tremendous growth and expansion of the Internet from 1981 to 1989, he acted as chief protocol architect in the development of the Internet, and chaired the Internet Activities Board, which later became the Internet Architecture Board. He is currently a senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory. In 1990 Clark was awarded the ACM SIGCOMM Award "in recognition of his major contributions to Internet protocol and architecture." In 1998 he received the IEEE Richard W. 
Hamming Medal "for leadership and major contributions to the architecture of the Internet as a universal information medium". In 2001 he was inducted as a Fellow of the Association for Computing Machinery for "his preeminent role in the development of computer communication and the Internet, including architecture, protocols, security, and telecommunications policy". In 2001, he was awarded the Telluride Tech Festival Award of Technology in Telluride, Colorado, and in 2011 the Lifetime Achievement Award from the Oxford Internet Institute, University of Oxford "in recognition of his intellectual and institutional contributions to the advance of the Internet." J. Farber Starting in the 1980s Dave Farber (1934–2026) helped conceive and organize the major American research networks CSNET, NSFNET, and the National Research and Education Network (NREN). He helped create the NSF/DARPA-funded Gigabit Network Test bed Initiative and served as the chairman of the Gigabit Test bed Coordinating Committee. He also served as chief technologist at the US Federal Communications Commission (2000–2001) and is a founding editor of ICANNWatch. Farber is an IEEE Fellow, ACM Fellow, recipient of the 1995 SIGCOMM Award for vision and breadth of contributions to and inspiration of others in computer networks, distributed computing, and network infrastructure development, and the 1996 John Scott Award for seminal contributions to the field of computer networks and distributed computer systems. He served on the board of directors of the Electronic Frontier Foundation, the Electronic Privacy Information Center advisory board, the board of trustees of the Internet Society, and as a member of the Presidential Advisory Committee on High Performance Computing and Communications, Information Technology and Next Generation Internet. On 3 August 2013, Farber was inducted into the Pioneers Circle of the Internet Hall of Fame for his key role in many systems that converged into today's Internet. Paul Mockapetris Paul V. Mockapetris (born 1948), while working with Jon Postel at the Information Sciences Institute (ISI) in 1983, proposed the Domain Name System (DNS) architecture. He was IETF chair from 1994 to 1996. Mockapetris received the 1997 John C. Dvorak Telecommunications Excellence Award "Personal Achievement - Network Engineering" for DNS design and implementation, the 2003 IEEE Internet Award for his contributions to DNS, and the Distinguished Alumnus award from the University of California, Irvine. In May 2005, he received the ACM Sigcomm lifetime award. In 2012, Mockapetris was inducted into the Internet Hall of Fame by the Internet Society. Joyce K. Reynolds Joyce K. Reynolds (1952–2015) was an American computer scientist and served as part of the editorial team of the RFC series from 1987 to 2006. She performed the IANA function with Jon Postel until this was transferred to ICANN, then worked with ICANN in this role until 2001, while remaining an employee of ISI. As Area Director of the User Services area, she was a member of the Internet Engineering Steering Group of the IETF from 1990 to March 1998. Together with Bob Braden, she received the 2006 Postel Award in recognition of her services to the Internet. She is mentioned, along with a brief biography, in RFC 1336, Who's Who in the Internet (1992). In 2025, Reynolds was inducted into the Internet Hall of Fame by the Internet Society. 
Dave Crocker Dave Crocker, the younger brother of Steve Crocker, was awarded the IEEE Internet Award in 2004, together with Ray Tomlinson, for their work on network messaging – the invention of email. He began his networking work on the ARPANET and is still active in development. Susan Estrada Susan Estrada founded CERFnet, one of the original regional IP networks, in 1988. Through her leadership and collaboration with PSINet and UUnet, Estrada helped form the interconnection enabling the first commercial Internet traffic via the Commercial Internet Exchange. She wrote Connecting to the Internet in 1993 and was inducted into the Internet Hall of Fame in 2014. She is on the board of trustees of the Internet Society. Dave Mills David L. Mills (1938–2024) was an American computer engineer. Mills earned his PhD in Computer and Communication Sciences from the University of Michigan in 1971. While at Michigan he worked on the ARPA-sponsored Conversational Use of Computers (CONCOMP) project and developed DEC PDP-8 based hardware and software to allow terminals to be connected over phone lines to an IBM System/360 mainframe computer. Mills was the chairman of the Gateway Algorithms and Data Structures Task Force (GADS) and the first chairman of the Internet Architecture Task Force. He invented the Network Time Protocol (1981), the DEC LSI-11 based fuzzball router that was used for the 56 kbit/s NSFNET (1985), and the Exterior Gateway Protocol (1984), and inspired the author of ping (1983). He was an emeritus professor at the University of Delaware following his retirement in 2008 after 22 years of teaching for the university. In 1999 he was inducted as a Fellow of the Association for Computing Machinery, and in 2002, as a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). In 2008, Mills was elected to the National Academy of Engineering (NAE). In 2013 he received the IEEE Internet Award "For significant leadership and sustained contributions in the research, development, standardization, and deployment of quality time synchronization capabilities for the Internet." Radia Perlman Radia Joy Perlman (born 1951) is the software designer and network engineer who developed the Spanning Tree Protocol, which is fundamental to the operation of network bridges. She also played an important role in the development of link-state routing protocols such as IS-IS (which had a significant influence on OSPF). In 2010 she received the ACM SIGCOMM Award "for her fundamental contributions to the Internet routing and bridging protocols that we all use and take for granted every day." Dennis M. Jennings Dennis M. Jennings is an Irish physicist, academic, Internet pioneer, and venture capitalist. In 1984, the National Science Foundation (NSF) began construction of several regional supercomputing centers to provide very high-speed computing resources for the US research community. In 1985 NSF hired Jennings to lead the establishment of the National Science Foundation Network (NSFNET) to link five of the supercomputing centers to enable sharing of resources and information. Jennings made three critical decisions that shaped the subsequent development of NSFNET. He was also actively involved in the start-up of research networks in Europe (European Academic Research Network, EARN - President; EBONE - Board member) and Ireland (HEAnet - initial proposal and later board member). 
He chaired the Board and General Assembly of the Council of European National Top Level Domain Registries (CENTR) from 1999 to early 2001 and was actively involved in the start-up of the Internet Corporation for Assigned Names and Numbers (ICANN). He was a member of the ICANN Board from 2007 to 2010, serving as vice-chair in 2009–2010. In April 2014 Jennings was inducted into the Internet Hall of Fame. Steve Wolff Stephen "Steve" Wolff participated in the development of ARPANET while working for the U.S. Army. In 1986 he became Division Director for Networking and Communications Research and Infrastructure at the National Science Foundation (NSF) where he managed the development of NSFNET. He also conceived the Gigabit Testbed, a joint NSF-DARPA project to prove the feasibility of IP networking at gigabit speeds. His work at NSF transformed the fledgling internet from a narrowly focused U.S. government project into the modern Internet with scholarly and commercial interest for the entire world. In 1994 he left NSF to join Cisco as a technical manager in Corporate Consulting Engineering. In 2011 he became the CTO at Internet2. In 2002 the Internet Society recognized Wolff with its Postel Award. When presenting the award, Internet Society (ISOC) President and CEO Lynn St. Amour said "…Steve helped transform the Internet from an activity that served the specific goals of the research community to a worldwide enterprise which has energized scholarship and commerce throughout the world." The Internet Society also recognized Wolff in 1994 for his courage and leadership in advancing the Internet. Sally Floyd Sally Floyd (1950–2019) was an American engineer recognized for her extensive contributions to Internet architecture and her work in identifying practical ways to control and stabilize Internet congestion. She invented the random early detection active queue management scheme, which has been implemented in nearly all commercially available routers, and devised the now-common method of adding delay jitter to message timers to avoid synchronization collisions. Floyd, with Vern Paxson, in 1997 identified the lack of knowledge of network topology as the major obstacle in understanding how the Internet works. This paper, "Why We Don't Know How to Simulate the Internet", was re-published as "Difficulties in Simulating the Internet" in 2001 and won the IEEE Communication Society's William R. Bennett Prize Paper Award. Floyd was also a co-author on the standard for TCP Selective acknowledgement (SACK), Explicit Congestion Notification (ECN), the Datagram Congestion Control Protocol (DCCP) and TCP Friendly Rate Control (TFRC). She received the IEEE Internet Award in 2005 and the ACM SIGCOMM Award in 2007 for her contributions to congestion control. She has been involved in the Internet Advisory Board, and, as of 2007, was one of the top-ten most cited researchers in computer science. Van Jacobson Van Jacobson is an American computer scientist, best known for his work on TCP/IP network performance and scaling. His work redesigning TCP/IP's flow control algorithms (Jacobson's algorithm) to better handle congestion is said to have saved the Internet from collapsing in the late 1980s and early 1990s. He is also known for the TCP/IP Header Compression protocol described in RFC 1144: Compressing TCP/IP Headers for Low-Speed Serial Links, popularly known as Van Jacobson TCP/IP Header Compression. He is co-author of several widely used network diagnostic tools, including traceroute, tcpdump, and pathchar. 
He was a leader in the development of the multicast backbone (MBone) and the multimedia tools vic, vat, and wb. For his work, Jacobson received the 2001 ACM SIGCOMM Award for Lifetime Achievement and the 2003 IEEE Koji Kobayashi Computers and Communications Award, and was elected to the National Academy of Engineering in 2006. In 2012, Jacobson was inducted into the Internet Hall of Fame by the Internet Society. Tim Berners-Lee Timothy John "Tim" Berners-Lee (born 1955) is a British physicist and computer scientist. In 1980, while working at CERN, he proposed a project using hypertext to facilitate sharing and updating information among researchers. While there, he built a prototype system named ENQUIRE. Back at CERN in 1989 he conceived of and, in 1990, together with Robert Cailliau, created the first client and server implementations for what became the World Wide Web. Berners-Lee is the director of the World Wide Web Consortium (W3C), a standards organization which oversees and encourages the Web's continued development, co-director of the Web Science Trust, and founder of the World Wide Web Foundation. In 1994, Berners-Lee became one of only six members of the World Wide Web Hall of Fame. In 2004, Berners-Lee was knighted by Queen Elizabeth II for his pioneering work. In April 2009, he was elected a foreign associate of the United States National Academy of Sciences, based in Washington, D.C. In 2012, Berners-Lee was inducted into the Internet Hall of Fame by the Internet Society. Robert Cailliau Robert Cailliau (French: [kaˈjo], born 1947) is a Belgian informatics engineer and computer scientist who, working with Tim Berners-Lee and Nicola Pellow at CERN, developed the World Wide Web. In 2012 he was inducted into the Internet Hall of Fame by the Internet Society. Simon S. Lam Simon S. Lam (born 1947) is an American computer scientist. He was inducted into the Internet Hall of Fame (2023) by the Internet Society for "inventing secure sockets in 1991 and implementing the first secure sockets layer, named SNP, in 1993." In 1990, while a professor at the University of Texas at Austin, he was inspired by a paper he had written on the formal semantics of the upper and lower interfaces of a protocol layer, and he conceived the idea of a new security sublayer in the Internet protocol stack. The new sublayer, at the bottom of the Application layer, would make use of transport layer sockets for data transfer and offer corresponding secure sockets to application processes. In this way, application programmers would not need to know much about the implementation details of the security mechanisms. The sublayer's upper interface would also allow its implementation to be changed in the future. Lam's idea of a sublayer which offers a "secure sockets interface" to applications was novel and a radical departure from contemporary security research for Internet applications (e.g., MIT's Kerberos, 1988–1992). Lam wrote a proposal to the NSA University Research Program, which was funded for two years. By early 1993, Lam, with the help of three graduate students (Woo, Bindignavle, and Su), designed and implemented the first secure sockets layer, named Secure Network Programming (SNP). They demonstrated SNP to their NSA program manager when he visited UT-Austin in June 1993.
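The "secure sockets interface" abstraction described above, in which an application keeps using ordinary socket calls while a sublayer handles the cryptography, is the same idea that later SSL/TLS libraries exposed. The following is a rough, illustrative sketch using Python's standard ssl module (not SNP's own API, which is not reproduced here); the host name is a placeholder:

```python
# Illustrative only: modern TLS via Python's standard ssl module, not Lam's SNP API.
# The point is the abstraction: the application holds a socket-like object and never
# touches the cryptographic details handled by the security sublayer.
import socket
import ssl

context = ssl.create_default_context()  # certificate validation, cipher selection, etc.

with socket.create_connection(("example.org", 443)) as raw_sock:
    # wrap_socket() returns an object with the familiar socket interface,
    # but every byte sent or received is protected by the sublayer.
    with context.wrap_socket(raw_sock, server_hostname="example.org") as secure_sock:
        secure_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        print(secure_sock.recv(1024).decode(errors="replace"))
```

The application-level code is essentially what it would be over a plain socket; only the wrapping step changes, which is the separation of concerns the sublayer idea proposed.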
They also published and presented SNP at the USENIX Summer Technical Conference on June 8, 1994, including its architecture, system design, and performance evaluation results to demonstrate its efficiency and practicality. SNP was created for Internet applications in general, concurrently with and independently of the invention and development of the WWW, which had only dozens of servers worldwide in early 1993. Subsequent secure sockets layers, SSL and TLS, developed years later, follow the same architecture and key ideas as SNP. Today's TLS 1.3 is used for all e-commerce applications (banking, shopping, etc.), for email, and for many other Internet applications. Lam and his students won the 2004 ACM Software System Award for SNP. He received the 2004 ACM SIGCOMM Award for lifetime contribution to the field of communication networks. He was inducted into the National Academy of Engineering in 2007. Marc Andreessen Marc L. Andreessen (born 1971) is an American software engineer, entrepreneur, and investor. Working with Eric Bina while at NCSA, he co-authored Mosaic, the first widely used web browser. He is also co-founder of Netscape Communications Corporation. Eric Bina Eric J. Bina (born 1964) is an American computer programmer. In 1993, together with Marc Andreessen, he authored the first version of Mosaic while working at NCSA at the University of Illinois at Urbana–Champaign. Mosaic is famed as the first killer application that popularized the Internet. He is also a co-founder of Netscape Communications Corporation. Noel Chiappa Stephen Kent David Reed Yngvar Lundh Pål Spilling Bob Braden Scott Shenker Scott Shenker received the IEEE Internet Award in 2006 for contributions to the study of resource sharing. Lixia Zhang Lixia Zhang received the IEEE Internet Award in 2009 for Internet architecture and modeling. Stephen Deering Stephen Deering received the IEEE Internet Award in 2010 for IP multicasting and IPv6. Jun Murai Jun Murai is a professor at Keio University. He is the founder of JUNET and the WIDE Project. Murai received the IEEE Internet Award in 2011 for leadership in the development of the global Internet, especially in Asia. He was inducted into the Internet Hall of Fame in 2013, recognizing his administrative and co-ordination efforts in establishing Internet connectivity in Japan and his service as President of the Japan Network Information Center. Mark Handley Mark Handley is Professor of Networked Systems in the Department of Computer Science of University College London, where he leads the Networks Research Group. He received the IEEE Internet Award in 2012 for exceptional contributions to the advancement of Internet technology for network architecture, mobility, and/or end-use applications. Jon Crowcroft Jon Crowcroft is the Marconi Professor of Communications Systems in the Department of Computer Science and Technology, University of Cambridge. He received the IEEE Internet Award in 2014 for contributions to research in and teaching of Internet protocols, including multicast, transport, quality of service, security, mobility, and opportunistic networking. KC Claffy KC Claffy is director of the Center for Applied Internet Data Analysis at the University of California, San Diego. She received the IEEE Internet Award in 2015 for seminal contributions to the field of Internet measurement, including security and network data analysis, and for distinguished leadership in and service to the Internet community by providing open-access data and tools. In 2017 she was awarded the Jonathan B.
Postel Service Award, and in 2019 she was inducted into the Internet Hall of Fame. Vern Paxson Vern Paxson is a professor of computer science at the University of California, Berkeley. He is an active member of the Internet Engineering Task Force (IETF) community and served as the chair of the IRTF from 2001 until 2005. From 1998 to 1999 he served on the IESG as Transport Area Director for the IETF. In 2006 Paxson was inducted as a Fellow of the Association for Computing Machinery (ACM). The ACM's Special Interest Group on Data Communications (SIGCOMM) gave Paxson its 2011 award, "for his seminal contributions to the fields of Internet measurement and Internet security, and for distinguished leadership and service to the Internet community." The annual SIGCOMM Award recognizes lifetime contribution to the field of communication networks. He received the IEEE Internet Award in 2015 for seminal contributions to the field of Internet measurement, including security and network data analysis, and for distinguished leadership in and service to the Internet community by providing open-access data and tools. Henning Schulzrinne Henning Schulzrinne received the IEEE Internet Award in 2016. Deborah Estrin Deborah Estrin received the IEEE Internet Award in 2017. Ramesh Govindan Ramesh Govindan received the IEEE Internet Award in 2018. Jennifer Rexford Jennifer Rexford received the IEEE Internet Award in 2019. Eve Schooler Eve Schooler and Stephen Casner received the IEEE Internet Award in 2020 for contributions to Internet multimedia standards and protocols. Ian Foster Ian Foster received the IEEE Internet Award in 2023. Carl Kesselman Carl Kesselman received the IEEE Internet Award in 2023. Other Internet pioneers Some other people who have made notable contributions to the development of the Internet, but who do not meet the criteria defined at the top of the article, include the following. Wesley Clark (1927–2016) had a key insight in the planning for the ARPANET. In April 1967, he suggested to Bob Taylor and Larry Roberts the idea of using separate small computers (later named Interface Message Processors) as a way of forming a message switching network and reducing load on the local computers. Barry Wessler (1943–2018) was hired by Larry Roberts in 1968 to work for him at ARPA. Roberts and Wessler wrote the "IMP Specification" for the ARPANET, which was discussed at the June 1968 meeting of principal investigators. Wessler approved the Network Control Program for the ARPANET in 1970, after ordering certain more exotic elements to be dropped. In 1972, Barry Wessler updated Larry Roberts' RD program for network mail and called it NRD. Later that year, Wessler co-authored a report for the second sub-group of INWG, which considered host-to-host protocol requirements for an international protocol. Along with Larry Roberts, he left ARPA in 1973 to found Telenet, a commercial packet-switched network in the US. They both joined the international effort to standardize a protocol for packet switching based on virtual circuits shortly before it was finalized as X.25. He was recognized for his role as an Internet pioneer. Severo Ornstein (born 1930) was part of the Bolt, Beranek and Newman (BBN) team that wrote the winning proposal submitted in 1968 to ARPA for the ARPANET. He was responsible for the design of the communication interfaces and other special hardware for the Interface Message Processor (IMP). William Crowther (born 1936) was part of the original BBN IMP team.
He implemented a distributed distance vector routing system for the ARPANET. He wrote INWG Note 10 with Dave Walden. Michel Elie (born 1961) was a research assistant at UCLA who participated in the original ARPANET project. He later worked on Louis Pouzin's CYCLADES project, as well as co-authoring a number of early publications and INWG notes on internetworking. Alex McKenzie worked at BBN from 1967 to 1996. He was involved with the development of the ARPANET host-to-host protocol, starting in 1970, and later in the development of Internet protocols through INWG and the IEN series. He co-authored the ARPANET Completion Report with Frank Heart, Dave Walden, and J. McQuillan. Later, he headed a project consulting for the US National Bureau of Standards (now NIST) to present the ideas developed in the Internet to the OSI project. David Boggs (1950–2022) co-invented Ethernet and worked on internetworking at Xerox PARC. He participated in the initial design of TCP during 1973–74. Sylvia B. Wilbur (born 1938) was a British computer scientist at University College London who programmed the local node for the ARPANET connection to British academic networks, was one of the first to exchange email in Britain in 1974, and became a leading researcher on computer-supported cooperative work. Mark P. McCahill (born 1956) is an American programmer and systems architect. While working at the University of Minnesota he led the development of the Gopher protocol (1991), the effective predecessor of the World Wide Web, and contributed to the development and popularization of a number of other Internet technologies from the 1980s. Nicola Pellow, one of the nineteen members of the WWW Project at CERN working with Tim Berners-Lee, is recognized for developing the first cross-platform web browser, the Line Mode Browser, which displayed web pages on dumb terminals and was released in May 1991. She joined the project in November 1990, while an undergraduate math student enrolled in a sandwich course at Leicester Polytechnic (now De Montfort University). She left CERN at the end of August 1991, but returned after graduating in 1992, and worked with Robert Cailliau on MacWWW, the first web browser for the classic Mac OS.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTESacks1974337–353-21] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple from Adab dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punning phrase "tertia deducta" can be translated either as "with one-third off (in price)" or as "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The authorship of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jest books along with Bibles to meet both the lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of it documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The printers' practice of using jokes and cartoons as page fillers was also widespread in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or forwarding it on to further recipients. Interaction is limited to the computer screen and is, for the most part, solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. The six KRs of the joke structure are script opposition (SO), logical mechanism (LM), situation (SI), target (TA), narrative strategy (NS), and language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
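To make the GTVH labelling described above concrete, a joke's classification can be thought of as a simple record with one field per Knowledge Resource. The sketch below is illustrative only: the field names follow the standard KR abbreviations, while the class name and sample values are invented for this example.

```python
# Hypothetical sketch of a GTVH-style label: one field per Knowledge Resource.
# Field names follow the KR abbreviations; the sample values are invented, and
# TA and LM may be empty, as noted in the quotation above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GTVHLabel:
    script_opposition: str            # SO: the two opposed scripts
    logical_mechanism: Optional[str]  # LM: how the scripts are connected (may be empty)
    situation: str                    # SI: the setting of the joke
    target: Optional[str]             # TA: the butt of the joke (may be empty)
    narrative_strategy: str           # NS: riddle, dialogue, one-liner, ...
    language: str                     # LA: the actual wording of the text

lightbulb_example = GTVHLabel(
    script_opposition="dumb/smart",
    logical_mechanism="figure-ground reversal",
    situation="changing a lightbulb",
    target=None,
    narrative_strategy="riddle",
    language="How many X does it take to change a lightbulb? ...",
)
```

Two jokes can then be compared simply by counting how many KR values they share, which is the sense in which the GTVH measures similarity of labels.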
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
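The template-driven punning programs described above can be caricatured in a few lines of code: a fixed riddle frame is filled from a small, pre-defined table of substitutions, which is roughly the level of "intelligence" such early systems exhibit. The template and word table below are invented for illustration and are not taken from any particular system.

```python
# A deliberately primitive, template-based pun generator, illustrating the
# finite, pre-defined approach of early computational-humour programs.
# The template and the substitution table are invented examples.
import random

TEMPLATE = "What do you call a {noun} that {trait}? A {pun}!"

# Each entry: (base noun, trait, sound-alike blend used as the punchline)
PUN_TABLE = [
    ("fish", "plays the guitar", "bass player"),
    ("bear", "gets caught in the rain", "drizzly bear"),
    ("cow", "has no legs", "ground beef"),
]

def make_pun_joke() -> str:
    noun, trait, pun = random.choice(PUN_TABLE)
    return TEMPLATE.format(noun=noun, trait=trait, pun=pun)

if __name__ == "__main__":
    print(make_pun_joke())
```

Everything such a program can ever "say" is already enumerated in its table; it has no semantic scripts behind the words, which is exactly the limitation the SSTH/GTVH discussion above identifies.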
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Google_Gemini_image_generation_controversy] | [TOKENS: 5222] |
Contents Google Gemini Gemini (also known as Google Gemini and formerly known as Bard) is a generative artificial intelligence chatbot and virtual assistant developed by Google. It is powered by the large language model (LLM) of the same name, after previously being based on LaMDA and PaLM 2. The Gemini architecture is trained natively on multiple data types, allowing the models to process and generate text, computer code, images, audio, and video simultaneously. Google distributes the technology in varying capacities, ranging from efficient on-device versions ("Nano") and cost-effective, high-throughput variants ("Flash") to high-compute models designed for complex reasoning ("Pro" and "Ultra"). The 1.5 and 3 model generations introduced extended context windows, enabling the analysis of large datasets such as entire codebases, long-form videos, or extensive document archives in a single prompt. Gemini was first announced on December 6, 2023, and replaced Google's existing branding for its AI services. In February 2024, the Bard chatbot was renamed Gemini, and the "Duet AI" branding for Google Cloud and Workspace was retired in favor of the Gemini identifier. The models integrate into the Google ecosystem through the Gemini mobile app, which functions as an overlay assistant on Android devices, and through the Vertex AI platform for third-party developers. The release of Gemini has generated technical praise and public controversy. Commentators have highlighted the models' benchmarks in coding and retrieval tasks as competitive with OpenAI's GPT-4 and GPT-5. However, the product launch faced criticism regarding the reliability of its outputs. In early 2024, Google suspended the model's ability to generate images of people after users reported historical inaccuracies and bias in its depictions of human subjects. Subsequent updates, including the Gemini 1.5 series released in 2024 and the Gemini 3 series released in 2025, focused on reducing hallucinations, improving latency, and enhancing agentic capabilities for autonomous research and software development. History In November 2022, OpenAI launched ChatGPT, a chatbot based on the GPT-3 family of large language models (LLMs). ChatGPT gained worldwide attention, becoming a viral Internet sensation. Alarmed by ChatGPT's potential threat to Google Search, Google executives issued a "code red" alert, reassigning several teams to assist in the company's artificial intelligence (AI) efforts. Sundar Pichai, the CEO of Google and parent company Alphabet, was widely reported to have issued the alert, but Pichai later denied this to The New York Times. In a rare move, Google co-founders Larry Page and Sergey Brin, who had stepped down from their roles as co-CEOs of Alphabet in 2019, attended emergency meetings with company executives to discuss Google's response to ChatGPT. In February 2023, Brin requested access to Google's code for the first time in years. Google had unveiled LaMDA, a prototype LLM, in 2021, but had not released it to the public. When asked by employees at an all-hands meeting whether LaMDA was a missed opportunity for Google to compete with ChatGPT, Pichai and Google AI chief Jeff Dean said that while the company's chatbot had similar capabilities to ChatGPT, there were risks to introducing an LLM that might spread false information, so they decided to wait.
In January 2023, Google Brain's sister company DeepMind CEO Demis Hassabis hinted at plans for a ChatGPT rival, and Google employees were instructed to accelerate progress on a ChatGPT competitor, intensively testing "Apprentice Bard" and other chatbots. Pichai assured investors during Google's quarterly earnings investor call in February that the company had plans to expand LaMDA's availability and applications. On February 6, 2023, Google announced Bard, a generative artificial intelligence chatbot powered by LaMDA. Bard was first rolled out to a select group of 10,000 "trusted testers", before a wide release scheduled at the end of the month. The project was overseen by product lead Jack Krawczyk, who described the product as a "collaborative AI service" rather than a search engine, while Pichai detailed how Bard would be integrated into Google Search. Reuters calculated that adding ChatGPT-like features to Google Search could cost the company $6 billion in additional expenses by 2024, while research and consulting firm SemiAnalysis calculated that it would cost Google $3 billion. The technology was developed under the codename "Atlas", with the name "Bard" in reference to the Celtic term for a storyteller and chosen to "reflect the creative nature of the algorithm underneath". Multiple media outlets and financial analysts described Google as "rushing" Bard's announcement to preempt rival Microsoft's planned February 7 event unveiling its partnership with OpenAI to integrate ChatGPT into its Bing search engine in the form of Bing AI (later rebranded as Microsoft Copilot), as well as to avoid playing "catch-up" to Microsoft. Microsoft CEO Satya Nadella told The Verge: "I want people to know that we made them dance." Tom Warren of The Verge and Davey Alba of Bloomberg News noted that this marked the beginning of another clash between the two Big Tech companies over "the future of search", after their six-year "truce" expired in 2021; Chris Stokel-Walker of The Guardian, Sara Morrison of Recode, and analyst Dan Ives of investment firm Wedbush Securities labeled this an AI arms race between the two. After an "underwhelming" February 8 livestream in Paris showcasing Bard, Google's stock fell eight percent, equivalent to a $100 billion loss in market value, and the YouTube video of the livestream was made private. Many viewers also pointed out an error during the demo in which Bard gives inaccurate information about the James Webb Space Telescope in response to a query. Google employees criticized Pichai's "rushed" and "botched" announcement of Bard on Memegen, the company's internal forum, while Maggie Harrison of Futurism called the rollout "chaos". Pichai defended his actions by saying that Google had been "deeply working on AI for a long time", rejecting the notion that Bard's launch was a knee-jerk reaction. A week after the Paris livestream, Pichai had 80,000 employees dedicate two to four hours to dogfood testing Bard, while Google executive Prabhakar Raghavan had employees correct any errors Bard made. In the following weeks, Google employees criticized Bard in internal messages, citing safety and ethical concerns and calling on company leaders not to launch the service. Google executives launched the product, overruling a negative risk assessment report conducted by its AI ethics team. 
After Pichai suddenly laid off 12,000 employees later that month due to slowing revenue growth, remaining workers shared memes and snippets of their humorous exchanges with Bard soliciting its "opinion" on the layoffs. Google employees began testing a more sophisticated version of Bard with larger parameters, dubbed "Big Bard", in mid-March. Google opened up early access for Bard on March 21, 2023, in a limited capacity, allowing users in the US and the UK to join a waitlist. Unlike Microsoft's approach with Bing Chat, Bard was launched as a standalone web application featuring a text box and a disclaimer that the chatbot "may display inaccurate or offensive information that doesn't represent Google's views". Three responses are then provided to each question, with users prompted to submit feedback on the usefulness of each answer. Google vice presidents Sissie Hsiao and Eli Collins framed Bard as a complement to Google Search and stated that the company had not determined how to make the service profitable. Among those granted early access were those enrolled in Google's "Pixel Superfans" loyalty program, users of its Pixel and Nest devices, and Google One subscribers. Bard is trained by third-party contractors hired by Google, including Appen and Accenture workers, whom Business Insider and Bloomberg News reported were placed under extreme pressure, overworked, and underpaid. Bard is also trained on data from publicly available sources, which Google disclosed by amending its privacy policy. Shortly after Bard's initial launch, Google reorganized the team behind Google Assistant, the company's virtual assistant, to focus on Bard instead. Google researcher Jacob Devlin resigned from the company after claiming that Bard had surreptitiously leveraged data from ChatGPT; Google denied the allegations. Meanwhile, a senior software engineer at the company published an internal memo warning that Google was falling behind in the AI "arms race", not to OpenAI but to independent researchers in open-source communities. Pichai revealed on March 31 that the company intended to "upgrade" Bard by basing it on PaLM, a newer and more powerful LLM from Google, rather than LaMDA. The same day, Krawczyk announced that Google had added "math and logic capabilities" to Bard. Bard gained the ability to assist in coding in April, being compatible with more than 20 programming languages at launch. Microsoft also began running advertisements in the address bar of a developer build of the Edge browser, urging users to try Bing whenever they visit the Bard web app. 9to5Google reported that Google was working to integrate Bard into its ChromeOS operating system and Pixel devices. Bard took center stage during the annual Google I/O keynote in May 2023, with Pichai and Hsiao announcing a series of updates to Bard, including the adoption of PaLM 2, integration with other Google products and third-party services, expansion to 180 countries, support for additional languages, and new features. In stark contrast to previous years, the Google Assistant was barely mentioned during the event. The expanded rollout did not include any nations in the European Union (EU), possibly reflecting concerns about compliance with the General Data Protection Regulation. Those with Google Workspace accounts also gained access to the service. In June, Google attempted to launch Bard in the EU but was blocked by the Irish Data Protection Commission, who requested a "data protection impact assessment" from the company. 
In July, Bard was launched in the EU and Brazil, adding support for dozens of new languages and introducing personalization and productivity features. An invite-only chatroom ("server") on Discord was created in July, consisting of users who heavily used Bard. Over the next few months, the chatroom was flooded with comments questioning the usefulness of Bard. Google released a major update to the chatbot in September, integrating it into many of its products through "extensions", adding a button to attempt to fact-check AI-generated responses through Google Search, and allowing users to share conversation threads. Google also introduced the "Google-Extended" web crawler as part of its search engine's robots.txt indexing file to allow web publishers to opt-out of allowing Bard to scan them for training. Online users later discovered that Google Search was indexing Bard conversation threads on which users had enabled sharing; Google stated that this was an error which was corrected. In October, during the company's annual Made by Google event, Hsiao unveiled "Assistant with Bard", an upgraded version of the Google Assistant which was integrated with Bard. When the U.S. Copyright Office solicited public comment on potential new regulation on generative AI technologies, Google joined with OpenAI and Microsoft in arguing that the responsibility for generating copyrighted material lay with the user, not the developer. Accenture contractors voted to join the Alphabet Workers Union in November, in protest of suboptimal working conditions, while the company filed a lawsuit in the U.S. District Court for the Northern District of California against a group of unidentified scammers who had been advertising malware disguised as a downloadable version of Bard. On December 6, 2023, Google announced Gemini, a larger, multimodal LLM. A specially tuned version of the mid-tier Gemini Pro was integrated into Bard, while the larger Gemini Ultra was used for "Bard Advanced" in 2024. The Wall Street Journal reported that Bard was then averaging around 220 million monthly visitors. Google ended its contract with Appen in January 2024, while image generation was added to Bard the next month, using Google Brain's Imagen 2 text-to-image model. On February 8, 2024, Bard and Duet AI were unified under the Gemini brand, with a mobile app launched on Android and the service integrated into the Google app on iOS. On Android, users who downloaded the app saw Gemini replace Assistant as their device's default virtual assistant, though Assistant remained a standalone service. Google also launched "Gemini Advanced with Ultra 1.0", available via a "Google One AI Premium" subscription, incorporated Gemini into its Messages app on Android, and announced a partnership with Stack Overflow. Gemini again took center stage at the 2024 Google I/O keynote. Google announced that Gemini would be integrated into several products, including Android, Chrome, Photos, and Workspace. Gemini Advanced was upgraded to the "Gemini 1.5 Pro" language model, with Google previewing Gemini Live, a voice chat mode, and Gems, the ability to create custom chatbots. Following the Gemini 1.0 launch in December 2023, Google released successive generations of the underlying model at an accelerating pace. In February 2024, Google introduced Gemini 1.5 Pro in a limited preview, notable for its one-million-token context window, which allowed it to process roughly an hour of video, 11 hours of audio, or 30,000 lines of code in a single prompt. 
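The "Google-Extended" opt-out described above operates through the ordinary robots.txt mechanism: a publisher that does not want its pages used for Bard/Gemini training adds rules for the Google-Extended user-agent token. As a minimal sketch, the following Python snippet uses the standard-library urllib.robotparser to check a site's stated policy; the example domain is hypothetical.

# Check whether a site's robots.txt permits the "Google-Extended" token,
# which publishers use to opt out of AI-training data collection.
from urllib.robotparser import RobotFileParser

def allows_google_extended(site: str, path: str = "/") -> bool:
    # Fetch and parse the live robots.txt, then ask whether the
    # "Google-Extended" user agent may fetch the given path.
    parser = RobotFileParser()
    parser.set_url(site.rstrip("/") + "/robots.txt")
    parser.read()
    return parser.can_fetch("Google-Extended", site.rstrip("/") + path)

# A publisher opting out would typically serve rules such as:
#   User-agent: Google-Extended
#   Disallow: /
print(allows_google_extended("https://example.com"))  # hypothetical domain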
Gemini 1.5 Flash, a faster and more cost-efficient variant aimed at developers, was announced at Google I/O in May 2024 alongside the expansion of Gemini Advanced. On December 11, 2024, Google announced Gemini 2.0 Flash as the first model in the Gemini 2.0 generation, framing it as the beginning of an "agentic era" in which AI models could take autonomous multi-step actions. Unlike previous Gemini models, 2.0 Flash supported native multimodal outputs, including generated images and text-to-speech audio, in addition to multimodal inputs. It also introduced Deep Research for Gemini Advanced users, enabling the model to autonomously browse and synthesize information across sources. Gemini 2.0 Flash became generally available on February 5, 2025, alongside an experimental release of Gemini 2.0 Pro and the budget-oriented Gemini 2.0 Flash-Lite. Gemini 2.5 Pro Experimental was released on March 25, 2025, as Google's first explicitly designated "thinking model", capable of reasoning through steps before responding using chain-of-thought techniques. It debuted at the top of the LMArena leaderboard, a benchmark measuring human preference for AI responses, where it remained for several months. Gemini 2.5 Flash was unveiled at Google I/O in May 2025 and reached general availability in June 2025 alongside the cost-optimized Gemini 2.5 Flash-Lite. On November 18, 2025, Google launched Gemini 3 Pro, describing it as its most intelligent model to date and marking a departure from the company's previous staged release patterns, with the model made immediately available across the Gemini app, Google Search, Google AI Studio, and Vertex AI on launch day. A more capable "Deep Think" reasoning mode began rolling out to Google AI Ultra subscribers in the weeks that followed. Gemini 3 Flash, a speed-optimized variant, followed in December 2025, becoming the new default model in the Gemini app. In February 2026, Google released a major upgrade to Gemini 3 Deep Think, which it described as targeting practical applications in science, research, and engineering. On February 19, 2026, Google launched Gemini 3.1 Pro in preview, characterizing it as a step forward in core reasoning. The model achieved an ARC-AGI-2 score of 77.1 percent, more than double that of Gemini 3 Pro, and 80.6 percent on SWE-Bench Verified, a benchmark for autonomous software engineering tasks. Originally introduced by Google in August 2024, Gemini Live debuted on the Pixel 9 series as the default virtual assistant, replacing the Google Assistant on those devices. It was later rolled out to Samsung Galaxy devices beginning with the Galaxy S25 series in July 2025, and subsequently expanded to models including the Z Fold 7 and Z Flip 7, where it became part of the Galaxy AI suite. In February 2025, Google introduced a feature for Gemini Advanced subscribers that enables the assistant to recall and reference past conversations. In June 2025, Google announced Gemini CLI, an open-source AI agent for use in computer terminals. At the same time, Google introduced a new Gemini logo with more rounded sparkles and a color scheme aligned with the updated Google logo. Reception Gemini received mixed reviews upon its initial release in March 2023. James Vincent of The Verge found it faster than ChatGPT and Bing Chat, but noted that the lack of Bing-esque footnotes was "both a blessing and a curse", encouraging Google to be bolder when experimenting with AI. 
His colleague David Pierce was unimpressed by its uninteresting and sometimes inaccurate responses, adding that despite Google's insistence that Gemini was not a search engine, its user interface resembled that of one, which could cause problems for Google. Cade Metz of The New York Times described Gemini as "more cautious" than ChatGPT, while Shirin Ghaffary of Vox called it "dry and uncontroversial" due to the reserved nature of its responses. The Washington Post columnist Geoffrey A. Fowler found Gemini a mixed bag, noting that it acted cautiously but could show Internet-influenced bias. Writing for ZDNET, Sabrina Ortiz believed ChatGPT and Bing Chat were "more capable overall" in comparison to Gemini, while Wired journalist Lauren Goode found her conversation with Gemini "the most bizarre" of the three. After the introduction of extensions, The New York Times' Kevin Roose found the update underwhelming and "a bit of a mess", while Business Insider's Lakshmi Varanasi found that Gemini often leaned more into flattery than facts. In a 60 Minutes conversation with Hsiao, Google senior vice president James Manyika, and Pichai, CBS News correspondent Scott Pelley found Gemini "unsettling". Associate professor Ethan Mollick of the Wharton School of the University of Pennsylvania was underwhelmed by its artistic ineptitude. The New York Times conducted a test with ChatGPT and Gemini regarding their ability to handle tasks expected of human assistants, and concluded that ChatGPT's performance was vastly superior to that of Gemini. NewsGuard, a tool that rates the credibility of news articles, found that Gemini was more skilled at debunking known conspiracy theories than ChatGPT. A report published by the Associated Press cautioned that Gemini and other chatbots were prone to generate "false and misleading information that threaten[ed] to disenfranchise voters". In February 2024, social media users reported that Gemini was generating images that featured people of color and women in historically inaccurate contexts—such as Vikings, Nazi soldiers, and the Founding Fathers—and refusing prompts to generate images of white people. These images were derided on social media, including by conservatives who cited them as evidence of Google's "wokeness". The business magnate Elon Musk, whose company xAI operates the chatbot Grok, was among those who criticized Google, denouncing its suite of products as biased and racist. Musk and other users targeted Krawczyk, resurfacing his past comments discussing race, leading Krawczyk to withdraw from X and LinkedIn. The conservative-leaning tabloid New York Post ran a cover story on the incident in the print edition of its newspaper. In response, Krawczyk said that Google was "working to improve these kinds of depictions immediately", and Google paused Gemini's ability to generate images of people. Raghavan released a lengthy statement addressing the controversy, explaining that Gemini had "overcompensate[d]" amid its efforts to strive for diversity and acknowledging that the images were "embarrassing and wrong". In an internal memo to employees, Pichai called the debacle offensive and unacceptable, promising structural and technical changes. Several employees in Google's trust and safety team were laid off days later. Hassabis stated that Gemini's ability to generate images of people would be restored within two weeks; it was ultimately relaunched in late August, powered by its new Imagen 3 model. 
The market reacted negatively, with Google's stock falling by 4.4 percent. Pichai faced growing calls to resign, including from technology analysts Ben Thompson and Om Malik. House Republicans led by Jim Jordan subpoenaed Google, accusing the company of colluding with the Biden administration to censor speech. In light of the fiasco and Google's overall response to OpenAI, Business Insider's Hugh Langley and Lara O'Reilly declared that Google was fast going "from vanguard to dinosaur". Bloomberg columnist Parmy Olson suggested that Google's "rushed" rollout of Gemini was the cause of its woes, not "wokeness". Martin Peers, writing for The Information, opined that Google needed a leader like Mark Zuckerberg to defuse the situation. Hugging Face scientist Sasha Luccioni and Surrey University professor Alan Woodward believed that the incident had "deeply embedded" roots in Gemini's training corpus and algorithms, making it difficult to rectify. Jeremy Kahn of Fortune called for researchers focused on safety and responsibility to work together to develop better guardrails. New York magazine contributor John Herrman wrote: "It's a spectacular unforced error, a slapstick rake-in-the-face moment, and a testament to how panicked Google must be by the rise of OpenAI and the threat of AI to its search business." During the 2024 Summer Olympics in July, Google aired a commercial for Gemini entitled "Dear Sydney" depicting a father asking the chatbot to generate a fan letter to the star athlete Sydney McLaughlin-Levrone for his young daughter. Similar to Apple's "Crush!" commercial for the seventh-generation iPad Pro, the advertisement drew heavy backlash online, with criticism for replacing authentic human expression and creativity with a computer; The Washington Post columnist Alexandra Petri lambasted the commercial as "missing the point". As a result, Google withdrew the commercial from NBC's rotation. Google aired two commercials during Super Bowl LIX in February 2025, both promoting Gemini. The first, entitled "50 States, 50 Stories", consisted of a national spot and 50 regional spots showcasing how small businesses in each U.S. state leverage Gemini in Google Workspace. Social media users noticed a factual error in Wisconsin's spot regarding gouda cheese, prompting Google to edit out the incorrect statistic, while The Verge claimed that Google had "faked" some of Gemini's output in the same commercial by plagiarizing text on the web. Garett Sloane of Ad Age commented that these blunders illustrated the risks of advertising AI technology. The other commercial was entitled "Dream Job" and featured a father using Gemini on his Pixel 9 to prepare for a job interview; Google also ran a third commercial entitled "Party Blitz" online, in which a man "attempts to impress his girlfriend's family by using Gemini [on his Pixel 9] to become a football expert". In 2022, McLaren Racing announced a multi-year partnership with Google. As part of Google's partnership extension with McLaren in 2024, Gemini was advertised on the McLaren Formula One car, including racing a special livery based on Gemini's color palette for the 2025 United States Grand Prix. In the aftermath of the image generation controversy, some users began accusing Gemini's text responses of being biased toward the political left. In one such example that circulated online, Gemini said that it was "difficult to say definitively" whether Musk or the Nazi dictator Adolf Hitler had more negatively affected society. 
Other users reported that Gemini tended to promote left-wing politicians and causes such as affirmative action and abortion rights while refusing to promote right-wing figures, meat consumption, and fossil fuels. The Wall Street Journal's editorial board wrote that Gemini's "apparently ingrained woke biases" were "fueling a backlash toward AI on the political right, which is joining the left in calling for more regulation." Indian Ministry of Electronics and Information Technology junior minister Rajeev Chandrasekhar alleged that Google had violated the country's Information Technology Rules by refusing to summarize an article by the right-wing news website OpIndia, and for saying that some experts described Prime Minister Narendra Modi's policies as fascist. In France, Google was fined €250 million by the competition regulator Autorité de la concurrence under the Directive on Copyright in the Digital Single Market, in part due to its cited failure to inform local news publishers of when their content was used for Gemini's training. Voice of America accused Gemini of "parroting" Chinese propaganda. In November 2024, CBS News reported that Gemini had responded to a college student in Michigan asking for help with homework in a threatening manner, calling the student "a burden on society" and saying "Please die. Please." A statement issued by Google said "This response violated our policies and we've taken action to prevent similar outputs from occurring." See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jean-Baptiste_Lamarck] | [TOKENS: 4619] |
Contents Jean-Baptiste Lamarck Jean-Baptiste Pierre Antoine de Monet, chevalier de Lamarck (1 August 1744 – 18 December 1829), often known simply as Lamarck (/ləˈmɑːrk/; French: [ʒɑ̃batist lamaʁk]), was a French naturalist, biologist, academic, and soldier. He was an early proponent of the idea that biological evolution occurred and proceeded in accordance with natural laws, though the mechanism he suggested has been largely refuted. Lamarck fought in the Seven Years' War against Prussia, and was awarded a commission for bravery on the battlefield. Posted to Monaco, Lamarck became interested in natural history and resolved to study medicine. He retired from the army after being injured in 1766, and returned to his medical studies. Lamarck developed a particular interest in botany, and later, after he published the three-volume work Flore françoise (1778), he gained membership of the French Academy of Sciences in 1779. Lamarck became involved in the Jardin des Plantes and was appointed to the Chair of Botany in 1788. When the French National Assembly founded the Muséum national d'Histoire naturelle in 1793, Lamarck became a professor of zoology. In 1801, he published Système des animaux sans vertèbres, a major work on the classification of invertebrates, a term which he coined. In an 1802 publication, he became one of the first to use the term "biology" in its modern sense.[Note 1] Lamarck continued his work as a premier authority on invertebrate zoology. He is remembered, at least in malacology, as a taxonomist of considerable stature. The modern era generally remembers Lamarck for a theory of inheritance of acquired characteristics, called Lamarckism (inaccurately named after him), soft inheritance, or use/disuse theory, which he described in his 1809 Philosophie zoologique. However, the idea of soft inheritance antedates him, and while it was only a small element of his theory of evolution, it was in his time accepted by many natural historians. Lamarck's idea of use and disuse is believed to have in part inspired Darwin, whose theory of natural selection ultimately contradicted Lamarckism. Lamarck's contribution to evolutionary theory consisted of the first truly cohesive theory of biological evolution, in which an alchemical complexifying force drove organisms up a ladder of complexity, and a second environmental force adapted them to local environments through use and disuse of characteristics, differentiating them from other organisms. Scientists have debated whether advances in the field of transgenerational epigenetics mean that Lamarck was correct to an extent. Biography Jean-Baptiste Lamarck was born in Bazentin, Picardy, northern France, as the 11th child in an impoverished aristocratic family.[Note 2] Male members of the Lamarck family had traditionally served in the French army. Lamarck's eldest brother was killed in combat at the Siege of Bergen op Zoom, and two other brothers were still in service when Lamarck was in his teenage years. Yielding to the wishes of his father, Lamarck enrolled in a Jesuit college in Amiens in the late 1750s. After his father died in 1760, Lamarck bought himself a horse, and rode across the country to join the French army, which was in Germany at the time. Lamarck showed great physical courage on the battlefield in the Seven Years' War with Prussia, and he was even nominated for the lieutenancy. 
Lamarck's company was left exposed to the direct artillery fire of their enemies, and was quickly reduced to just 14 men—with no officers. One of the men suggested that the puny, 17-year-old volunteer should assume command and order a withdrawal from the field; although Lamarck accepted command, he insisted they remain where they had been posted until relieved. When their colonel reached the remains of their company, this display of courage and loyalty impressed him so much that Lamarck was promoted to officer on the spot. However, when one of his comrades playfully lifted him by the head, he sustained an inflammation in the lymphatic glands of the neck, and he was sent to Paris to receive treatment. He was awarded a commission and settled at his post in Monaco. There, he encountered Traité des plantes usuelles, a botany book by James Francis Chomel. With a reduced pension of only 400 francs a year, Lamarck resolved to pursue a profession. He attempted to study medicine, and supported himself by working in a bank office. Lamarck studied medicine for four years, but gave it up under his elder brother's persuasion. He was interested in botany, especially after his visits to the Jardin du Roi, and he became a student under Bernard de Jussieu, a notable French naturalist. Under Jussieu, Lamarck spent 10 years studying French flora. In 1776, he wrote his first scientific essay—a chemical treatise. After his studies, in 1778, he published some of his observations and results in a three-volume work, entitled Flore française. Lamarck's work was respected by many scholars, and it launched him into prominence in French science. On 8 August 1778, Lamarck married Marie Anne Rosalie Delaporte. Georges-Louis Leclerc, Comte de Buffon, one of the top French scientists of the day, mentored Lamarck, and helped him gain membership to the French Academy of Sciences in 1779 and a commission as a royal botanist in 1781, in which he traveled to foreign botanical gardens and museums. Lamarck's first son, André, was born on 22 April 1781, and he made his colleague André Thouin the child's godfather. In his two years of travel, Lamarck collected rare plants that were not available in the Royal Garden, and also other objects of natural history, such as minerals and ores, that were not found in French museums. On 7 January 1786, his second son, Antoine, was born, and Lamarck chose Antoine Laurent de Jussieu, Bernard de Jussieu's nephew, as the boy's godfather. On 21 April the following year, Charles René, Lamarck's third son, was born. René Louiche Desfontaines, a professor of botany at the Royal Garden, was the boy's godfather, and Lamarck's elder sister, Marie Charlotte Pelagie De Monet, was the godmother. In 1788, Buffon's successor at the position of Intendant of the Royal Garden, Charles-Claude Flahaut de la Billaderie, comte d'Angiviller, created a position for Lamarck, with a yearly salary of 1,000 francs, as the keeper of the herbarium of the Royal Garden. In 1790, at the height of the French Revolution, Lamarck changed the name of the Royal Garden from Jardin du Roi to Jardin des Plantes, a name that did not imply such a close association with King Louis XVI. Lamarck had worked as the keeper of the herbarium for five years before he was appointed curator and professor of invertebrate zoology at the Muséum national d'histoire naturelle in 1793. During his time at the herbarium, Lamarck's wife gave birth to three more children before dying on 27 September 1792. 
With the official title of "Professeur d'Histoire naturelle des Insectes et des Vers", Lamarck received a salary of nearly 2,500 francs per year. The following year, on 9 October, he married Charlotte Reverdy, who was 30 years his junior. On 26 September 1794 Lamarck was appointed to serve as secretary of the assembly of professors for the museum for a period of one year. In 1797, Charlotte died, and he married Julie Mallet the following year; she died in 1819. In his first six years as professor, Lamarck published only one paper, in 1798, on the influence of the moon on the Earth's atmosphere. Lamarck began as an essentialist who believed species were unchanging; however, after working on the molluscs of the Paris Basin, he grew convinced that transmutation or change in the nature of a species occurred over time. He set out to develop an explanation, and on 11 May 1800 (the 21st day of Floreal, Year VIII, in the revolutionary timescale used in France at the time), he presented a lecture at the Muséum national d'histoire naturelle in which he first outlined his newly developing ideas about evolution. In 1801, he published Système des Animaux sans Vertèbres, a major work on the classification of invertebrates. In the work, he introduced definitions of natural groups among invertebrates. He categorized echinoderms, arachnids, crustaceans, and annelids, which he separated from the old taxon for worms known as Vermes. Lamarck was the first to separate arachnids from insects in classification, and he moved crustaceans into a separate class from insects. In 1802, Lamarck published Hydrogéologie, and became one of the first to use the term biology in its modern sense. In Hydrogéologie, Lamarck advocated a steady-state geology based on a strict uniformitarianism. He argued that global currents tended to flow from east to west, and continents eroded on their eastern borders, with the material carried across to be deposited on the western borders. Thus, the Earth's continents marched steadily westward around the globe. That year, he also published Recherches sur l'Organisation des Corps Vivants, in which he drew out his theory on evolution. He believed that all life was organized in a vertical chain, with gradation between the lowest forms and the highest forms of life, thus demonstrating a path to progressive developments in nature. In his own work, Lamarck had favored the then-more traditional theory based on the classical four elements. During Lamarck's lifetime, he became controversial, attacking the more enlightened chemistry proposed by Lavoisier. He also came into conflict with the widely respected palaeontologist Georges Cuvier, who was not a supporter of evolution. According to Peter J. Bowler, Cuvier "ridiculed Lamarck's theory of transformation and defended the fixity of species." According to Martin J. S. Rudwick: Cuvier was clearly hostile to the materialistic overtones of current transformist theorizing, but it does not necessarily follow that he regarded species origin as supernatural; certainly he was careful to use neutral language to refer to the causes of the origins of new forms of life, and even of man. Lamarck gradually turned blind; he died in Paris on 18 December 1829. When he died, his family was so poor, they had to apply to the Academie for financial assistance. Lamarck was buried in a common grave of the Montparnasse cemetery for just five years, according to the grant obtained from relatives. Later, the body was dug up along with other remains and was lost. 
Lamarck's books and the contents of his home were sold at auction, and his body was buried in a temporary lime pit. After his death, Cuvier used the form of a eulogy to denigrate Lamarck: [Cuvier's] éloge of Lamarck is one of the most deprecatory and chillingly partisan biographies I have ever read—though he was supposedly writing respectful comments in the old tradition of de mortuis nil nisi bonum. — Gould, 1993 Lamarckian evolution While he was working on Hydrogéologie (1802), Lamarck had the idea to apply the principle of erosion to biology. This led him to the basic principle of evolution, which saw the fluids in organs inheriting more complex forms and functions, thus passing on these traits to the organism's descendants. This was a reversal from Lamarck's previous view, published in his Memoirs of Physics and Natural History (1797), in which he briefly refers to the immutability of species. Lamarck stressed two main themes in his biological work (neither of them to do with soft inheritance). The first was that the environment gives rise to changes in animals. He cited examples of blindness in moles, the presence of teeth in mammals and the absence of teeth in birds as evidence of this principle. The second principle was that life was structured in an orderly manner and that many different parts of all bodies make possible the organic movements of animals. Although he was not the first thinker to advocate organic evolution, he was the first to develop a truly coherent evolutionary theory. He outlined his theories regarding evolution first in his Floreal lecture of 1800, and then in three later published works: Lamarck employed several mechanisms as drivers of evolution, drawn from the common knowledge of his day and from his own belief in the chemistry before Lavoisier. He used these mechanisms to explain the two forces he saw as constituting evolution: force driving animals from simple to complex forms and a force adapting animals to their local environments and differentiating them from each other. He believed that these forces must be explained as a necessary consequence of basic physical principles, favoring a materialistic attitude toward biology. Lamarck referred to a tendency for organisms to become more complex, moving "up" a ladder of progress. He referred to this phenomenon as Le pouvoir de la vie or la force qui tend sans cesse à composer l'organisation (The force that perpetually tends to make order). Lamarck believed in the ongoing spontaneous generation of simple living organisms through action on physical matter by a material life force. Lamarck ran against the modern chemistry promoted by Lavoisier (whose ideas he regarded with disdain), preferring to embrace a more traditional alchemical view of the elements as influenced primarily by earth, air, fire, and water. He asserted that once living organisms form, the movements of fluids in living organisms naturally drove them to evolve toward ever greater levels of complexity: The rapid motion of fluids will etch canals between delicate tissues. Soon their flow will begin to vary, leading to the emergence of distinct organs. The fluids themselves, now more elaborate, will become more complex, engendering a greater variety of secretions and substances composing the organs. — Histoire naturelle des animaux sans vertebres, 1815 He argued that organisms thus moved from simple to complex in a steady, predictable way based on the fundamental physical principles of alchemy. 
In this view, simple organisms never disappeared because they were constantly being created by spontaneous generation in what has been described as a "steady-state biology". Lamarck saw spontaneous generation as being ongoing, with the simple organisms thus created being transmuted over time, becoming more complex. He is sometimes regarded as believing in a teleological (goal-oriented) process where organisms became more perfect as they evolved, though as a materialist, he emphasized that these forces must originate necessarily from underlying physical principles. According to the paleontologist Henry Fairfield Osborn, "Lamarck denied, absolutely, the existence of any 'perfecting tendency' in nature, and regarded evolution as the final necessary effect of surrounding conditions on life." Charles Coulston Gillispie, a historian of science, has written "life is a purely physical phenomenon in Lamarck", and argued that Lamarck's views should not be confused with the vitalist school of thought. The second component of Lamarck's theory of evolution was the adaptation of organisms to their environment. This could move organisms upward from the ladder of progress into new and distinct forms with local adaptations. It could also drive organisms into evolutionary blind alleys, where the organism became so finely adapted that no further change could occur. Lamarck argued that this adaptive force was powered by the interaction of organisms with their environment, by the use and disuse of certain characteristics. The last clause of this law introduces what is now called soft inheritance, the inheritance of acquired characteristics, or simply "Lamarckism", though it forms only a part of Lamarck's thinking. However, in the field of epigenetics, evidence is growing that soft inheritance plays a part in the changing of some organisms' phenotypes; it leaves the genetic material (DNA) unaltered (thus not violating the central dogma of biology) but prevents the expression of genes, such as by methylation to modify DNA transcription; this can be produced by changes in behaviour and environment, though there is no known example in which this is related to the use or disuse of an organ or a function. Many epigenetic changes are heritable to a degree, though often only for a few generations. Thus, while DNA itself is not directly altered by the environment and behavior except through selection, the relationship of the genotype to the phenotype can be altered, even across some generations, by the surroundings within the lifetime of an individual. This has led to calls for biologists to reconsider the possibility of Lamarckian-like processes in evolution in light of modern advances in molecular biology. Religious views In his book Philosophie zoologique, Lamarck referred to God as the "sublime author of nature". Lamarck's religious views are examined in the book Lamarck, the Founder of Evolution (1901) by Alpheus Packard. According to Packard, judging from Lamarck's writings, he may be regarded as a deist. The philosopher of biology Michael Ruse described Lamarck "as believing in God as an unmoved mover, creator of the world and its laws, who refuses to intervene miraculously in his creation." Biographer James Moore described Lamarck as a "thoroughgoing deist". The historian Jacques Roger has written, "Lamarck was a materialist to the extent that he did not consider it necessary to have recourse to any spiritual principle... 
his deism remained vague, and his idea of creation did not prevent him from believing everything in nature, including the highest forms of life, was but the result of natural processes." Legacy Lamarck is known largely for his views on evolution, which have been dismissed in favour of developments in Darwinism. His theory of evolution only achieved fame after the publication of Charles Darwin's On the Origin of Species (1859), which spurred critics of Darwin's new theory to fall back on Lamarckian evolution as a more well-established alternative. Lamarck is usually remembered for his belief in the then commonly held theory of inheritance of acquired characteristics, and the use and disuse model by which organisms developed their characteristics. Lamarck incorporated this belief into his theory of evolution, along with other common beliefs of the time, such as spontaneous generation. The inheritance of acquired characteristics (also called the theory of adaptation or soft inheritance) was rejected by August Weismann in the 1880s[Note 3] when he developed a theory of inheritance in which germ plasm (the sex cells, later redefined as DNA), remained separate and distinct from the soma (the rest of the body); thus, nothing which happens to the soma may be passed on with the germ plasm. This model allegedly underlies the modern understanding of inheritance. Lamarck constructed one of the first theoretical frameworks of organic evolution. While this theory was generally rejected during his lifetime, Stephen Jay Gould argues that Lamarck was the "primary evolutionary theorist", in that his ideas, and the way in which he structured his theory, set the tone for much of the subsequent thinking in evolutionary biology, through to the present day. Developments in epigenetics, the study of cellular and physiological traits that are heritable by daughter cells and not caused by changes in the DNA sequence, have caused debate about whether a "neolamarckist" view of inheritance could be correct: Lamarck was not in a position to give a molecular explanation for his theory. Eva Jablonka and Marion Lamb, for example, call themselves neolamarckists. Reviewing the evidence, David Haig argued that any such mechanisms must themselves have evolved through natural selection. Darwin allowed a role for use and disuse as an evolutionary mechanism subsidiary to natural selection, most often in respect of disuse.[Note 4] He praised Lamarck for "the eminent service of arousing attention to the probability of all change in the organic... world, being the result of law, not miraculous interposition". Lamarckism is also occasionally used to describe quasi-evolutionary concepts in societal contexts, though not by Lamarck himself. For example, the memetic theory of cultural evolution is sometimes described as a form of Lamarckian inheritance of nongenetic traits. During his lifetime, Lamarck named a large number of species, many of which have become synonyms. The World Register of Marine Species gives no fewer than 1,634 records. The Indo-Pacific Molluscan Database gives 1,781 records. Among these are some well-known families such as the ark clams (Arcidae), the sea hares (Aplysiidae), and the cockles (Cardiidae). The International Plant Names Index gives 58 records, including a number of well-known genera such as the mosquito fern (Azolla).[citation needed] The honeybee subspecies Apis mellifera lamarckii is named after Lamarck, as well as the bluefire jellyfish (Cyanea lamarckii). 
A number of plants have also been named after him, including Amelanchier lamarckii (juneberry), Digitalis lamarckii, the palm tree Dictyocaryum lamarckinum, and Aconitum lamarckii, as well as the grass genus Lamarckia. The International Plant Names Index gives 116 records of plant species named after Lamarck. Among the marine species, no fewer than 103 species or genera carry the epithet "lamarcki", "lamarckii" or "lamarckiana", but many have since become synonyms.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joysticks] | [TOKENS: 3246] |
Contents Joystick A joystick, sometimes called a flight stick, is an input device consisting of a stick that pivots on a base and reports its angle or direction to the device it is controlling. Also known as the control column, it is the principal control device in the cockpit of many civilian and military aircraft, either as a centre stick or side-stick. It has various switches that allow the pilot and first officer to control functions of the aircraft. Joysticks are often used to control video games, and usually have push-buttons whose state can be read by the computer. A popular variation of the joystick used on modern video game consoles is the analog stick. Joysticks are also used for controlling machines such as cranes, trucks, underwater unmanned vehicles, wheelchairs, surveillance cameras, and zero turning radius lawn mowers. Miniature finger-operated joysticks have been adopted as input devices for smaller electronic equipment such as mobile phones. Aviation Joysticks originated as controls for aircraft ailerons and elevators, and are first known to have been used as such on Louis Bleriot's Bleriot VIII aircraft of 1908, in combination with a foot-operated rudder bar for the yaw control surface on the tail. Origins The name joystick is thought to originate with early 20th century French pilot Robert Esnault-Pelterie. There are also competing claims on behalf of fellow pilots Robert Loraine, James Henry Joyce, and A. E. George. Loraine is cited by the Oxford English Dictionary for using the term "joystick" in his diary in 1909 when he went to Pau to learn to fly at Blériot's school. George was a pioneer aviator who, with his colleague Jobling, built and flew a biplane at Newcastle in England in 1910. The George and Jobling aircraft control column is in the collection of the Discovery Museum in Newcastle upon Tyne, England. Joysticks were present in early planes, though their mechanical origins are uncertain. The coining of the term "joystick" may actually be credited to Loraine, as his is the earliest known usage of the term, although he most certainly did not invent the device. Electronic joysticks The electrical two-axis joystick was invented by C. B. Mirick at the United States Naval Research Laboratory (NRL) and patented in 1926 (U.S. Patent no. 1,597,416). NRL was actively developing remote controlled aircraft at the time and the joystick was possibly used to support this effort. In the awarded patent, Mirick writes: "My control system is particularly applicable in maneuvering aircraft without a pilot." The Germans developed an electrical two-axis joystick around 1944. The device was part of the Germans' Funkgerät FuG 203 Kehl radio control transmitter system fitted to certain German bomber aircraft, and was used to guide both the rocket-boosted anti-ship missile Henschel Hs 293 and the unpowered pioneering precision-guided munition Fritz-X against maritime and other targets. Here, the joystick of the Kehl transmitter was used by an operator to steer the missile towards its target. This joystick had on-off switches rather than analogue sensors. Both the Hs 293 and Fritz-X used FuG 230 Straßburg radio receivers to relay the Kehl's control signals to the ordnance's control surfaces. A comparable joystick unit was used for the contemporary American Azon steerable munition, strictly to steer the munition laterally in the yaw axis. 
This German invention was picked up by someone in the team of scientists assembled at the Heeresversuchsanstalt in Peenemünde. Here a part of the team on the German rocket program was developing the Wasserfall missile, a variant of the V-2 rocket, the first ground-to-air missile. The Wasserfall steering equipment converted the electrical signal to radio signals and transmitted these to the missile. In the 1960s the use of joysticks became widespread in radio-controlled model aircraft systems such as the Kwik Fly produced by Phill Kraft (1964). The now-defunct Kraft Systems firm eventually became an important OEM supplier of joysticks to the computer industry and other users. The first use of joysticks outside the radio-controlled aircraft industry may have been in the control of powered wheelchairs, such as the Permobil (1963). During this time period NASA used joysticks as control devices as part of the Apollo missions. For example, the lunar lander test models were controlled with a joystick. In many modern airliners, for example all Airbus aircraft developed from the 1980s, the joystick has received a new lease on life for flight control in the form of the "side-stick", a controller similar to a gaming joystick but which is used to control flight, replacing the traditional yoke. The sidestick saves weight, improves movement and visibility in the cockpit, and may be safer in an accident than the yoke. In early 1968, Sega released MotoPolo, an arcade electro-mechanical game with joystick controllers, used to move miniature motorbikes in any direction on the table. The same year in 1968, Ralph H. Baer developed the first prototype joystick controller for a video game, with a golf ball mounted on the joystick handle as it was intended for a golf game he was working on at the time. The earliest known electronic game joystick with a fire button was released by Sega as part of their 1969 arcade game Missile, a shooter simulation game that used it as part of an early dual-control scheme, where two directional buttons are used to move a motorized tank and a two-way joystick is used to shoot and steer the missile onto oncoming planes displayed on the screen; when a plane is hit, an explosion is animated on screen along with an explosion sound. In 1970, the game was released in North America as S.A.M.I. by Midway Games. Taito released a four-way joystick as part of their arcade racing video game Astro Race in 1973, while their 1975 multidirectional shooter Western Gun introduced dual-stick controls with one eight-way joystick for movement and the other for changing the shooting direction. In North America, it was released by Midway under the title Gun Fight. In 1976, Taito released Interceptor, an early first-person combat flight simulator that involved piloting a jet fighter, using an eight-way joystick to aim with a crosshair and shoot at enemy aircraft. The Atari CX40 joystick, developed for the 1977 Atari Video Computer System, is a digital controller with a single fire button. The Atari joystick port was for many years the de facto standard digital joystick specification. Joysticks were commonly used as controllers in first and second generation game consoles, but they gave way to the familiar game pad with the Nintendo Entertainment System and Master System during the mid-1980s, though joysticks—especially arcade-style ones—were and are popular after-market add-ons for any console. The Armatron, a toy robot arm introduced by Tomy in 1982, was moved by dual analog control joysticks. 
In 1985, Sega's third-person arcade rail shooter game Space Harrier featured a true analog flight stick, used for movement. The joystick could register movement in any direction as well as measure the degree of push, which could move the player character at different speeds depending on how far the joystick was pushed in a certain direction. A variation of the joystick is the rotary joystick. It is a type of joystick-knob hybrid, in which the stick can be moved in various directions and can also be rotated. It is mainly used in arcade shoot 'em up games, to control both the player's eight-directional movement and the gun's 360-degree direction. It was introduced by SNK, initially with the tank shooter TNK III (1985) before it was popularized by the run and gun video game Ikari Warriors (1986). SNK later used rotary joystick controls in arcade games such as Guerrilla War (1987). A distinct variation of an analog joystick is a positional gun, which works differently from a light gun. Instead of using light sensors, a positional gun is essentially an analog joystick mounted in a fixed location that records the position of the gun to determine where the player is aiming on the screen. It is often used for arcade gun games, with early examples including Sega's Sea Devil in 1972; Taito's Attack in 1976; Cross Fire in 1977; and Nintendo's Battle Shark in 1978. During the 1990s, joysticks such as the CH Products Flightstick, Gravis Phoenix, Microsoft SideWinder, Logitech WingMan, and Thrustmaster FCS were in demand with PC gamers. They were considered a prerequisite for flight simulators such as F-16 Fighting Falcon and LHX Attack Chopper. Joysticks became especially popular with the mainstream success of space flight simulator games like X-Wing and Wing Commander, as well as the "Six degrees of freedom" 3D shooter Descent. VirPil Controls' MongoosT-50 joystick was designed to mimic the style of Russian aircraft (including the Sukhoi Su-35 and Sukhoi Su-57), unlike most flight joysticks. However, since the beginning of the 21st century, these types of games have waned in popularity and are now considered a "dead" genre, and with that, gaming joysticks have been reduced to niche products. In NowGamer's interview with Jim Boone, a producer at Volition Inc., he stated that FreeSpace 2's poor sales could have been due to joysticks selling poorly because they were "going out of fashion" as more modern first-person shooters, such as Quake, were "very much about the mouse and [the] keyboard". He went on to state: "Before that, when we did Descent for example, it was perfectly common for people to have joysticks – we sold a lot of copies of Descent. It was around that time [when] the more modern FPS with mouse and keyboard came out, as opposed to just keyboard like Wolfenstein [3D] or something." Since the late 1990s, analog sticks (or thumbsticks, due to their being controlled by one's thumbs) have become standard on controllers for video game consoles, popularized by Nintendo's Nintendo 64 controller, and have the ability to indicate the stick's displacement from its neutral position. This means that the software does not have to keep track of the position or estimate the speed at which the controls are moved. These devices usually use potentiometers to determine the position of the stick, though some newer models instead use a Hall effect sensor for greater reliability and reduced size; a brief sketch of how such displacement readings can be normalized appears at the end of this article. In 1997, ThrustMaster, Inc. 
introduced a 3D programmable controller, which was designed to work with computer games for flight simulation experiences. This line adapted several aspects of NASA's RHC (Rotational Hand Controller), which is used for landing and navigation methods. In 1997, the first gaming joystick with force feedback (haptics) was manufactured by CH Products under license from the technology's creator, Immersion Corporation. The product, called the Force FX joystick, was followed by force feedback joysticks from Logitech, Thrustmaster, and others, also under license from Immersion. An arcade stick is a large-format controller for use with home consoles or computers. They use the stick-and-button configuration of some arcade cabinets, such as those with particular multi-button arrangements. For example, the six-button layout of the arcade games Street Fighter II or Mortal Kombat cannot be comfortably emulated on a console joypad, so licensed home arcade sticks for these games have been manufactured for home consoles and PCs. A hat switch is a control on some joysticks. It is also known as a POV (point of view) switch in electronic games, where it allows one to look around in one's virtual world, browse menus, etc. For example, many flight simulators use it to switch the player's views, while other games sometimes use it as a substitute for the D-pad. Computer gamepads with both an analogue stick and a D-pad usually assign POV switch scancodes to the latter. The term hat switch is a shortening of the term "coolie hat switch", named for the similar-looking headgear. In a real aircraft, the hat switch may control things like aileron or elevator trim. Apart from buttons, wheels, dials, and touchscreens, miniature joysticks have also become established for the efficient manual operation of cameras. Industrial applications In recent times, the employment of joysticks has become commonplace in many industrial and manufacturing applications, such as cranes, assembly lines, forestry equipment, mining trucks, and excavators. In fact, such joysticks are in such high demand that they have virtually replaced the traditional mechanical control lever in nearly all modern hydraulic control systems. Additionally, most unmanned aerial vehicles (UAVs) and submersible remotely operated vehicles (ROVs) require at least one joystick to control the vehicle, the on-board cameras, sensors, or manipulators. Due to the highly hands-on, rough nature of such applications, the industrial joystick tends to be more robust than the typical video-game controller, and able to function over a high cycle life. This led to the development and employment of Hall effect sensing in such applications in the 1980s as a means of contactless sensing. Several companies produce joysticks for industrial applications using Hall effect technology. Another technology used in joystick design is the use of strain gauges to build force transducers whose output is proportional to the force applied rather than to physical deflection. Miniature force transducers are used as additional controls on joysticks for menu selection functions. Some larger manufacturers of joysticks are able to customize joystick handles and grips specific to OEM needs, while small regional manufacturers often concentrate on selling standard products at higher prices to smaller OEMs. Assistive technology Specialist joysticks, classed as an assistive technology pointing device, are used to replace the computer mouse for people with fairly severe physical disabilities. 
Rather than controlling games, these joysticks control the pointer. They are often useful to people with athetoid conditions, such as cerebral palsy, who find them easier to grasp than a standard mouse. Miniature joysticks are also available for people with conditions involving muscular weakness, such as muscular dystrophy or motor neuron disease. They are also used on electric-powered wheelchairs, since they are a simple and effective control method. Non-human use In 1996, a scientific study established that both chimpanzees and rhesus monkeys could be taught to move a pointer on a screen by using a joystick. Both species consistently demonstrated "conceptual knowledge" of the task required of them during trials, although rhesus monkeys were notably slower to do so. In 2021, another pair of researchers investigated the level of intelligence in domestic pigs by designing a joystick which could be controlled with their snouts. Unlike the chimpanzees or the rhesus monkeys, none of the four pigs was able to fully meet the 1996 study's criteria for "motoric or conceptual acquisition" of the task, but they still performed "significantly above chance". Notably, the pigs experienced additional difficulties in comparison to the primates: they were all far-sighted and so may have struggled with the details on screen, and they could not move the target with the joystick without first taking their eyes off the screen. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nintendo] | [TOKENS: 14927] |
Nintendo Co., Ltd.[c] is a Japanese multinational video game company headquartered in Kyoto. It develops, publishes, and manufactures both video games and video game consoles. The history of Nintendo began when craftsman Fusajiro Yamauchi founded the company to produce handmade hanafuda playing cards. After venturing into various lines of business and becoming a public company, Nintendo began producing toys in the 1960s, and later video games. Nintendo developed its first arcade games in the 1970s, and distributed its first system, the Color TV-Game, in 1977. The company became internationally dominant in the 1980s after the arcade release of Donkey Kong (1981) and the Nintendo Entertainment System, which launched outside of Japan alongside Super Mario Bros. in 1985. Since then, Nintendo has produced some of the most successful consoles in the video game industry, including the Game Boy (1989), the Super Nintendo Entertainment System (1991), the Game Boy Advance (2001), the Nintendo DS (2004), the Wii (2006), and the Nintendo Switch (2017). It has created or published numerous major franchises, including Mario, Donkey Kong, The Legend of Zelda, Pokémon, Super Smash Bros., Animal Crossing, Metroid, Kirby, and Star Fox. The company's mascot, Mario, is among the most famous fictional characters, and Nintendo's other characters—including Luigi, Donkey Kong, Samus, Link, Kirby, and Pikachu—have attained international recognition. Several films and a theme park area based on the company's franchises have been created. Nintendo's game consoles have sold over 860 million units worldwide as of May 2025, for which more than 5.9 billion individual games have been sold. The company has numerous subsidiaries in Japan and worldwide, in addition to second-party developers including HAL Laboratory, Intelligent Systems, and Game Freak. It is one of the wealthiest and most valuable companies in the Japanese market. History Nintendo was founded as Nintendo Koppai[d] on 23 September 1889 by craftsman Fusajiro Yamauchi in Shimogyō-ku, Kyoto, Japan, as an unincorporated establishment, to produce and distribute Japanese playing cards, or karuta (かるた; from Portuguese carta, 'card'), most notably hanafuda (花札, 'flower cards'). The name "Nintendo" is commonly assumed to mean "leave luck to heaven", but the assumption lacks historical validation; it has also been suggested to mean "the temple of free hanafuda", but even descendants of Yamauchi do not know the true intended meaning of the name. Hanafuda cards had become popular after Japan banned most forms of gambling in 1882 but tolerated hanafuda. The cards were popular with the yakuza-run gaming parlors in Kyoto. Other card manufacturers had opted to leave the market, not wanting to be associated with its criminality, but Yamauchi persisted despite such fears to become the primary producer of hanafuda within a few years. With the increase of the cards' popularity, Yamauchi hired assistants to mass-produce them to satisfy the demand. Even with a favorable start, the business faced financial struggles due to its niche market, the slow and expensive manufacturing process, and the high price and long durability of the cards, which depressed sales by keeping the replacement rate low. As a solution, Nintendo produced a cheaper and lower-quality line of playing cards, Tengu, while also conducting product offerings in other cities such as Osaka, where card game profits were high. 
In addition, local merchants were interested in the prospect of continuous renewal of decks, thus avoiding the suspicions that reusing cards would generate. According to Nintendo, the business' first western-style card deck was put on the market in 1902, although other documents indicate the date was 1907, shortly after the Russo-Japanese War. Although the cards were initially intended to be exported, they quickly gained popularity within and without Japan. During this time, the business styled itself as Marufuku Nintendo Card Co. The war created considerable difficulties for companies in the leisure sector, which were subject to new levies such as the Karuta Zei ("playing cards tax"). Nintendo subsisted and, in 1907, entered into an agreement with Nihon Senbai—later known as the Japan Tobacco—to market its cards to various cigarette stores throughout the country. A Nintendo promotional calendar from the Taishō era dated to 1915 indicates that the business was named Yamauchi Nintendo[e] but still used the Marufuku Nintendo Co. brand for its playing cards. Japanese culture stipulated that for Nintendo to continue as a family business after Yamauchi's retirement, Yamauchi had to adopt his son-in-law so that he could take over the business. As a result, Sekiryo Kaneda adopted the Yamauchi surname in 1907 and headed the business in 1929. By that time, Nintendo was the largest playing card business in Japan. In 1933, Sekiryo Kaneda established the company as a general partnership named Yamauchi Nintendo & Co., Ltd.[f] investing in the construction of a new corporate headquarters located next to the original building, near the Toba-kaidō train station. Because Sekiryo's marriage to Yamauchi's daughter produced no male heirs, he planned to adopt his son-in-law Shikanojo Inaba, an artist in the company's employ and the father of his grandson Hiroshi, born in 1927. However, Inaba abandoned his family and the company, so Hiroshi was made Sekiryo's eventual successor. World War II negatively impacted the company as Japanese authorities prohibited the diffusion of foreign card games, and as the priorities of Japanese society shifted, its interest in recreational activities waned. During this time, Nintendo was partly supported by a financial injection from Hiroshi's wife Michiko Inaba, who came from a wealthy family. In 1947, Sekiryo founded the distribution company Marufuku Co., Ltd.[g] responsible for Nintendo's sales and marketing operations, which would eventually go on to become the present-day Nintendo Co., Ltd., in Higashikawara-cho, Imagumano, Higashiyama-ku, Kyoto. In 1950, due to Sekiryo's deteriorating health, Hiroshi Yamauchi assumed the presidency and headed manufacturing operations. His first actions involved several important changes in the operation of the company: in 1951, he changed the company name to Nintendo Playing Card Co., Ltd.[h] and in the following year, he centralized the manufacturing facilities dispersed in Kyoto, which led to the expansion of the offices in Kamitakamatsu-cho, Fukuine, Higashiyama-ku, Kyoto. In 1953, Nintendo became the first company to succeed in mass-producing plastic playing cards in Japan. Some of the company's employees, accustomed to more cautious and conservative leadership, viewed the new measures with concern, and the rising tension led to a call for a strike. However, the measure had no major impact, as Hiroshi resorted to the dismissal of several dissatisfied workers. 
In 1959, Nintendo moved its headquarters to Kamitakamatsu-cho, Fukuine, Higashiyama-ku in Kyoto. The company entered into a partnership with Walt Disney Productions to incorporate its characters into playing cards, which opened it up to the children's market and resulted in a boost to Nintendo's playing card business. Nintendo automated the production of Japanese playing cards using backing paper, and also developed a distribution system that allowed it to offer its products in toy stores. By 1961, the company had established a Tokyo branch in Chiyoda, Tokyo, and sold more than 1.5 million card packs, holding a high market share, for which it relied on televised advertising campaigns. In 1962, Nintendo became a public company by listing stock on the second section of the Osaka Securities Exchange and the Kyoto Stock Exchange. In the following year, the company adopted its current name, Nintendo & Co., Ltd.[i] and started manufacturing games in addition to playing cards. In 1964, Nintendo earned ¥150 million. Although the company experienced a period of economic prosperity, the Disney cards and derived products made it dependent on the children's market. The situation was exacerbated by the falling sales of its adult-oriented playing cards caused by Japanese society gravitating toward other hobbies such as pachinko, bowling, and nightly outings. When Disney card sales began to decline, Nintendo realized that it had no real alternative to alleviate the situation. After the 1964 Tokyo Olympics, Nintendo's stock price plummeted to its lowest recorded level of ¥60. In 1965, Nintendo hired Gunpei Yokoi to maintain the assembly-line machines used to manufacture its playing cards. Yamauchi increased Nintendo's investment in a research and development department in 1969, directed by Hiroshi Imanishi, a long-time employee of the company. Yokoi was moved to the newly created department and was responsible for coordinating various projects. Yokoi's experience in manufacturing electronic devices led Yamauchi to put him in charge of the company's games department, and his products would be mass-produced. During that period, Nintendo built a new production plant in Uji, just outside of Kyoto, and distributed classic tabletop games like chess, shogi, go, and mahjong, and other foreign games under the Nippon Game brand. The company's restructuring preserved a couple of areas dedicated to playing card manufacturing. In 1970, the company's stock listing was promoted to the first section of the Osaka Stock Exchange, and the reconstruction and enlargement of its corporate headquarters was completed. The year represented a watershed moment in Nintendo's history as it released Japan's first electronic toy—the Beam Gun, an optoelectronic pistol designed by Masayuki Uemura. In total, more than a million units were sold. Nintendo partnered with Magnavox to provide a light gun controller based on the Beam Gun design for the company's new home video game console, the Magnavox Odyssey, in 1971. Other popular toys released at the time included the Ultra Hand, the Ultra Machine, the Ultra Scope, and the Love Tester, all designed by Yokoi. More than 1.2 million units of Ultra Hand were sold in Japan. The growing demand for Nintendo's products led Yamauchi to further expand the offices, for which he acquired the surrounding land and assigned the production of cards to the original Nintendo building. Meanwhile, Yokoi, Uemura, and new employees such as Genyo Takeda continued to develop innovative products for the company. 
The Laser Clay Shooting System was released in 1973 and managed to surpass bowling in popularity. Though Nintendo's toys continued to gain popularity, the 1973 oil crisis caused both a spike in the cost of plastics and a change in consumer priorities that put essential products over pastimes, and Nintendo lost several billion yen. In 1974, Nintendo released Wild Gunman, a skeet shooting arcade simulation consisting of a 16 mm image projector with a sensor that detects a beam from the player's light gun. Both the Laser Clay Shooting System and Wild Gunman were successfully exported to Europe and North America. However, Nintendo's production speeds were still slow compared to rival companies such as Bandai and Tomy, and their prices were high, which led to the discontinuation of some of their light gun products. The subsidiary Nintendo Leisure System Co., Ltd., which developed these products, was closed as a result of the economic impact dealt by the oil crisis. Yamauchi, motivated by the successes of Atari and Magnavox with their video game consoles, acquired the Japanese distribution rights for the Magnavox Odyssey in 1974, and reached an agreement with Mitsubishi Electric to develop similar products between 1975 and 1978, including the first microprocessor for video games systems, the Color TV-Game series, and an arcade game inspired by Othello. During this period, Takeda developed the video game EVR Race, and Shigeru Miyamoto joined Yokoi's team with the responsibility of designing the casing for the Color TV-Game consoles. In 1978, Nintendo's research and development department was split into two facilities, Nintendo Research & Development 1 and Nintendo Research & Development 2, respectively managed by Yokoi and Uemura. Shigeru Miyamoto brought distinctive sources of inspiration to the company, ranging from the natural environment and regional culture of Sonobe, to popular culture influences like Westerns and detective fiction, and to folk Shinto practices and family media. They are seen in most of Nintendo's major franchises which developed following Miyamoto's creative leadership. By the late 1970s, Nintendo was struggling financially. Two key events in Nintendo's history occurred in 1979: its American subsidiary was opened in New York City, and a new department focused on arcade game development was created. In 1980, one of the first handheld video game systems, the Game & Watch, was created by Yokoi from the technology used in portable calculators. It became one of Nintendo's most successful products, with over 43.4 million units sold worldwide during its production period, and for which 59 games were made in total. The success of Game & Watch led Yamauchi to shift the company towards more electronic games in the years that followed. Nintendo entered the arcade video game market with Sheriff and Radar Scope, released in Japan in 1979 and 1980 respectively. Sheriff, also known as Bandido in some regions, marked the first original video game made by Nintendo, and was published by Sega and developed by Genyo Takeda and Shigeru Miyamoto. Radar Scope rivaled Galaxian in Japanese arcades but failed to find an audience overseas and created a financial crisis for the company. To try to find a more successful game, they put Miyamoto in charge of their next arcade game design, leading to the release of Donkey Kong in 1981, one of the first platform video games that allowed the player character to jump. The character Jumpman would later become Mario and Nintendo's official mascot. 
Mario was named after Mario Segale, the landlord of Nintendo's offices in Tukwila, Washington. Donkey Kong was a financial success for Nintendo both in Japan and overseas, and led Coleco to fight Atari for licensing rights for porting to home consoles and personal computers. In 1983, Nintendo opened a new production facility in Uji and was listed in the first section of the Tokyo Stock Exchange. Uemura, taking inspiration from the ColecoVision, began creating a new video game console that would incorporate a ROM cartridge format for video games as well as both a central processing unit and a picture processing unit. The Family Computer, or Famicom, was released in Japan in July 1983 along with three games adapted from their original arcade versions: Donkey Kong, Donkey Kong Jr. and Popeye. Its success was such that in 1984, it surpassed the market share held by Sega's SG-1000. That success also led to Nintendo leaving the Japanese arcade market in late 1985. At this time, Nintendo adopted a series of guidelines that involved the validation of each game produced for the Famicom before its distribution on the market, agreements with developers to ensure that no Famicom game would be adapted to other consoles within two years of its release, and restricting developers from producing more than five games per year for the Famicom. In the early 1980s, several video game consoles proliferated in the United States, as well as low-quality games produced by third-party developers, which oversaturated the market and led to the video game crash of 1983. Consequently, a recession hit the American video game industry, whose revenues went from over $3 billion to $100 million between 1983 and 1985. Nintendo's initiative to launch the Famicom in America was also impacted. To differentiate the Famicom from its competitors in America, Nintendo rebranded it as an entertainment system and its cartridges as Game Paks, with a design reminiscent of a VCR. Nintendo implemented a lockout chip in the Game Paks for control on its third party library to avoid the market saturation that had occurred in the United States. The result is the Nintendo Entertainment System, or NES, which was released in North America in 1985. The landmark games Super Mario Bros. and The Legend of Zelda were produced by Miyamoto and Takashi Tezuka. Composer Koji Kondo reinforced the idea that musical themes could act as a complement to game mechanics rather than simply a miscellaneous element. Production of the NES lasted until 1995, and production of the Famicom lasted until 2003. In total, around 62 million Famicom and NES consoles were sold worldwide. During this period, Nintendo created a copyright infringement protection in the form of the Official Nintendo Seal of Quality, added to their products so that customers may recognize their authenticity in the market. By this time, Nintendo's network of electronic suppliers had extended to around thirty companies, including Ricoh (Nintendo's main source for semiconductors) and the Sharp Corporation. In 1988, Yokoi and his team at Nintendo R&D1 conceived the Game Boy, the first handheld video game console made by Nintendo. Nintendo released the Game Boy in 1989. In North America, the Game Boy was bundled with the popular third-party game Tetris after a difficult negotiation process with Elektronorgtechnica. The Game Boy was a significant success. 
In its first two weeks of sale in Japan, its initial inventory of 300,000 units sold out, and in the United States, an additional 40,000 units were sold on its first day of distribution. Around this time, Nintendo entered an agreement with Sony to develop the Super Famicom CD-ROM Adapter, a peripheral for the upcoming Super Famicom capable of playing CD-ROMs. However, the collaboration did not last as Yamauchi preferred to continue developing the technology with Philips, which would result in the CD-i, and Sony's independent efforts resulted in the creation of the PlayStation console. The first issue of Nintendo Power magazine, which had an annual circulation of 1.5 million copies in the United States, was published in 1988. In July 1989, Nintendo held the first Nintendo Space World trade show with the name Shoshinkai to announce and demonstrate upcoming Nintendo products. That year, the first World of Nintendo stores-within-a-store, which carried official Nintendo merchandise, were opened in the United States. According to company information, more than 25% of homes in the United States had an NES in 1989. In the late 1980s, Nintendo's dominance slipped with the appearance of NEC's PC Engine/TurboGrafx-16 and Sega's Mega Drive/Genesis, 16-bit game consoles with improved graphics and audio compared to the NES. In response to the competition, Uemura designed the Super Famicom, which launched in 1990. The first batch of 300,000 consoles sold out in hours. The following year, as with the NES, Nintendo distributed a modified version of the Super Famicom to the United States market, titled the Super Nintendo Entertainment System. Launch games for the Super Famicom and Super NES include Super Mario World, F-Zero, Pilotwings, SimCity, and Gradius III. By mid-1992, over 46 million Super Famicom and Super NES consoles had been sold. The console's life cycle lasted until 1999 in the United States, and until 2003 in Japan. In March 1990, the first Nintendo World Championship was held, with participants from 29 American cities competing for the title of "best Nintendo player in the world". In June 1990, the subsidiary Nintendo of Europe was opened in Großostheim, Germany; in 1993, subsequent subsidiaries were established in the Netherlands (where Bandai had previously distributed Nintendo's products), France, the United Kingdom, Spain, Belgium, and Australia. In 1992, Nintendo acquired a majority stake in the Seattle Mariners baseball team, and sold most of its shares in 2016. On 31 July 1992, Nintendo of America announced it would cease manufacturing arcade games and systems. In 1993, Star Fox was released, which marked an industry milestone by being the first video game to make use of the Super FX chip. The proliferation of graphically violent video games, such as Mortal Kombat, caused controversy and led to the creation of the Interactive Digital Software Association and the Entertainment Software Rating Board, in whose development Nintendo collaborated during 1994. These measures also encouraged Nintendo to abandon the content guidelines it had enforced since the release of the NES. Commercial strategies implemented by Nintendo during this time include the Nintendo Gateway System, an in-flight entertainment service available for airlines, cruise ships and hotels, and the "Play It Loud!" advertising campaign for Game Boys with different-colored casings. 
The Advanced Computer Modeling graphics used in Donkey Kong Country for the Super NES and Donkey Kong Land for the Game Boy were technologically innovative, as was the Satellaview satellite modem peripheral for the Super Famicom, which allowed the digital transmission of data via a communications satellite in space. In 1995, Nintendo released the Virtual Boy, a console designed by Yokoi with stereoscopic graphics. Critics were generally disappointed with the quality of the games and the red-colored graphics, and complained of gameplay-induced headaches. The system sold poorly and was quietly discontinued. Amid the system's failure, Yokoi formally retired from Nintendo. In February 1996, Pocket Monsters Red and Green (known internationally as Pokémon Red and Blue) were developed by Game Freak and released in Japan for the Game Boy, establishing the popular Pokémon franchise. The games went on to sell 31.37 million units, with the video game series exceeding a total of 300 million units in sales as of 2017. The Nintendo 64 was released in June 1996 in Japan, September 1996 in the United States, and March 1997 in Europe. Though planned for release in 1995, the console was delayed by the production schedules of third-party developers. It had been in development since mid-1993, when Nintendo and Silicon Graphics announced a strategic alliance to develop the console. NEC, Toshiba, and Sharp also contributed technology to the console. The Nintendo 64 was marketed as one of the first consoles to be designed with 64-bit architecture. In 1997, Nintendo released the Rumble Pak, a plug-in device that connects to the Nintendo 64 controller and produces a vibration during certain moments of a game. By the end of its production in 2002, around 33 million Nintendo 64 consoles had been sold worldwide, and it is considered one of the most recognized video game systems in history. In total, 388 games were produced for the Nintendo 64, some of which – particularly Super Mario 64, The Legend of Zelda: Ocarina of Time, and GoldenEye 007 – have been distinguished as some of the greatest of all time. In 1998, the Game Boy Color was released. In addition to backward compatibility with Game Boy games, the console's capacity, similar to that of the NES, resulted in select adaptations of games from that library, such as Super Mario Bros. Deluxe. Since then, over 118.6 million Game Boy and Game Boy Color consoles have been sold worldwide. A series of administrative changes occurred in 2000, when Nintendo's corporate offices were moved to the Minami-ku neighborhood in Kyoto and Nintendo Benelux was established to manage the Dutch and Belgian territories. In 2001, two new Nintendo consoles were introduced: the Game Boy Advance, which was designed by Gwénaël Nicolas with a stylistic departure from its predecessors, and the GameCube, which features a 128-bit Gekko processor from IBM and a DVD drive from Panasonic. During the first week of the Game Boy Advance's North American release in June 2001, over 500,000 units were sold, making it the fastest-selling video game console in the United States at the time. By the end of its production cycle in 2010, more than 81.5 million units had been sold worldwide. As for the GameCube, even with such distinguishing features as the miniDVD format of its games and Internet connectivity for a few games, its sales were lower than those of its predecessors, and during the six years of its production, 21.7 million units were sold worldwide. 
The GameCube struggled against its rivals in the market, and its initial poor sales led to Nintendo posting a first half fiscal year loss in 2003 for the first time since the company went public in 1962. In 2002, the Pokémon Mini was released. Its dimensions were smaller than that of the Game Boy Advance and it weighed 70 grams, making it the smallest video game console in history. Nintendo collaborated with Sega and Namco to develop Triforce, an arcade board to facilitate the conversion of arcade titles to the GameCube. Following the European release of the GameCube in May 2002, Hiroshi Yamauchi announced his resignation as the president of Nintendo, and Satoru Iwata was selected by the company as his successor. Yamauchi would remain as advisor and director of the company until 2005. Iwata's appointment as president ended the Yamauchi succession at the helm of the company, a practice that had been in place since its foundation. In 2003, Nintendo released the Game Boy Advance SP, an improved version of the Game Boy Advance with a foldable case, an illuminated display, and a rechargeable battery. By the end of its production cycle in 2010, over 43.5 million units had been sold worldwide. Nintendo also released the Game Boy Player, a peripheral that allows Game Boy and Game Boy Advance games to be played on the GameCube. In 2004, Nintendo released the Nintendo DS, which featured such innovations as dual screens – one of which is a touchscreen – and wireless connectivity for multiplayer play. Throughout its lifetime, more than 154 million units were sold, making it the most successful handheld console and the second bestselling console in history. In 2005, Nintendo released the Game Boy Micro, the last system in the Game Boy line. Sales did not meet Nintendo's expectations, with 2.5 million units being sold by 2007. In mid-2005, the Nintendo World Store was inaugurated in New York City. Nintendo's next home console was conceived in 2001, although development commenced in 2003, taking inspiration from the Nintendo DS. Nintendo also considered the relative failure of the GameCube and instead opted to take a "Blue Ocean Strategy" by developing a reduced performance console in contrast to the high-performance consoles of Sony and Microsoft to avoid directly competing with them. The Wii was released in November 2006, with a total of 33 launch games. With the Wii, Nintendo sought to reach a broader demographic than its seventh-generation competitors, with the intention of also encompassing the "non-consumer" sector. Nintendo invested in a $200 million advertising campaign to that end. The Wii's innovations include the Wii Remote controller, equipped with an accelerometer system and infrared sensors that allow it to detect its position in a three-dimensional environment with the aid of a sensor bar; the Nunchuk peripheral that includes an analog controller and an accelerometer; and the Wii MotionPlus expansion that increases the sensitivity of the main controller with the aid of gyroscopes. By 2016, more than 101 million Wii consoles had been sold worldwide, making it the most successful console of its generation, a distinction that Nintendo had not achieved since the 1990s with the Super NES. Several accessories were released for the Wii from 2007 to 2010, such as the Wii Balance Board, the Wii Wheel and the WiiWare download service. In 2009, Nintendo Iberica S.A. expanded its commercial operations to Portugal through a new office in Lisbon. 
By that year, Nintendo held a 68.3% share of the worldwide handheld gaming market. After an announcement in March 2010, Nintendo released the Nintendo 3DS in 2011. The console produces stereoscopic effects without 3D glasses. By 2018, more than 69 million units had been sold worldwide; the figure increased to 75 million by the start of 2019. In 2012 and 2013, two new Nintendo game consoles were introduced: the Wii U, with high-definition graphics and a GamePad controller with near-field communication technology, and the Nintendo 2DS, a version of the 3DS that lacks the clamshell design of Nintendo's previous handheld consoles and the stereoscopic effects of the 3DS. With 13.5 million units sold worldwide, the Wii U is the least successful video game console in Nintendo's history. In 2014, a new product line was released consisting of figures of Nintendo characters called Amiibos. On 25 September 2013, Nintendo announced its acquisition of a 28% stake in PUX Corporation, a subsidiary of Panasonic, to develop facial, voice, and text recognition for its video games. Due to a 30% decrease in company income between April and December 2013, Iwata announced a temporary 50% cut to his salary, with other executives seeing reductions by 20%–30%. In January 2015, Nintendo ceased operations in the Brazilian market due in part to high import duties. This did not affect the rest of Nintendo's Latin American market due to an alliance with Juegos de Video Latinoamérica. Nintendo reached an agreement with NC Games for Nintendo's products to resume distribution in Brazil by 2017, and by September 2020, the Switch was released in Brazil. On 11 July 2015, Iwata died of bile duct cancer, and after a couple of months in which Miyamoto and Takeda jointly operated the company, Tatsumi Kimishima was named as Iwata's successor on 16 September 2015. As part of the management's restructuring, Miyamoto and Takeda were named creative and technological advisors, respectively. The financial losses caused by the Wii U, along with Sony's intention to release its video games to other platforms such as smart TVs, motivated Nintendo to rethink its strategy concerning the production and distribution of its properties. In 2015, Nintendo formalized agreements with DeNA and Universal Parks & Resorts to extend its presence to smart devices and amusement parks respectively. In March 2016, Nintendo's first mobile app for the iOS and Android systems, Miitomo, was released. Since then, Nintendo has produced other similar apps, such as Super Mario Run, Fire Emblem Heroes, Animal Crossing: Pocket Camp, Mario Kart Tour, and Pokémon Go, the last being developed by Niantic and having generated $115 million in revenue for Nintendo. In March 2016, the loyalty program My Nintendo replaced Club Nintendo. The NES Classic Edition was released in November 2016. The console is a version of the NES based on emulation, HDMI, and the Wii remote. Its successor, the Super NES Classic Edition, was released in September 2017. By October 2018, around ten million units of both consoles combined had been sold worldwide. The Wii U's successor in the eighth generation of video game consoles, the Nintendo Switch, was released in March 2017. The Switch features a hybrid design as a home and handheld console, Joy-Con controllers that each contain an accelerometer and gyroscope, and the simultaneous wireless networking of up to eight consoles. 
To expand its library, Nintendo entered alliances with several third-party and independent developers; by February 2019, more than 1,800 Switch games had been released. The Switch has shipped over 150 million units worldwide as of December 2024, becoming the third-best-selling console of all time behind the PlayStation 2 and Nintendo DS. It is also Nintendo's most successful home console to date, surpassing the Wii's 101.6 million units. In 2018, Shuntaro Furukawa replaced Kimishima as company president, and in 2019, Doug Bowser succeeded Nintendo of America president Reggie Fils-Aimé. In April 2019, Nintendo formed an alliance with Tencent to distribute the Nintendo Switch in China starting in December. In April 2020, Reuters reported that ValueAct Capital had acquired over 2.6 million shares in Nintendo stock worth US$1.1 billion over the course of a year, giving it an overall stake of 2% in Nintendo. Although the COVID-19 pandemic caused delays in the production and distribution of some of Nintendo's products, the situation "had limited impact on business results"; in May 2020, Nintendo reported a 75% increase in income compared to the previous fiscal year, mainly attributed to the Nintendo Switch Online service. The year saw some changes to the company's management: outside director Naoki Mizutani retired from the board and was replaced by Asa Shinkawa, and Yoshiaki Koizumi was promoted to senior executive officer, maintaining his role as deputy general manager of Nintendo EPD. By August, Nintendo was named the richest company in Japan. Super Nintendo World, a theme park area, opened at Universal Studios Japan in 2021. Nintendo co-produced an animated film, The Super Mario Bros. Movie, alongside Universal Pictures and Illumination, with Miyamoto and Illumination CEO Chris Meledandri acting as producers. In 2021, Furukawa indicated Nintendo's plan to create more animated projects based on its work beyond the Mario film, and on 29 June, Meledandri joined the board of directors as a non-executive outside director. According to Furukawa, the company's expansion toward animated production is intended to keep "[the] business [of producing video games] thriving and growing", realizing the "need to create opportunities where even people who do not normally play on video game systems can come into contact with Nintendo characters". That day, Miyamoto said that "[Meledandri] really came to understand the Nintendo point of view" and that "asking for [his] input, as an expert with many years of experience in Hollywood, will be of great help to" Nintendo's transition into film production. Later, in July 2022, Nintendo acquired Dynamo Pictures, a Japanese CG company founded by Hiroshi Hirokawa on 18 March 2011. Dynamo had worked with Nintendo on digital shorts in the 2010s, including for the Pikmin series, and Nintendo said that Dynamo would continue its goal of expanding into animation. Following the completion of the acquisition in October 2022, Nintendo renamed Dynamo as Nintendo Pictures. In February 2022, after a working relationship of some 40 years, Nintendo announced the acquisition of SRD Co., Ltd. (Systems Research and Development), a major contributor to Nintendo's first-party games such as Donkey Kong and The Legend of Zelda until the 1990s and a support studio since then. In May 2022, Reuters reported that Saudi Arabia's Public Investment Fund had purchased a 5% stake in Nintendo, and by January 2023, its stake in the company had increased to 6.07%. 
It was raised to 7.08% by February 2023, and in the same week to 8.26%, making the fund Nintendo's biggest external investor. In November 2024, Saudi Arabia's PIF dropped back to 6.3%. Super Nintendo World opened at Universal Studios Hollywood in early 2023, followed by a Donkey Kong-themed expansion of the original land at Universal Studios Japan in 2024, and the opening of a Super Nintendo World area at Universal Epic Universe in Orlando in May 2025. The Super Mario Bros. Movie was released on 5 April 2023 and has grossed over $1.3 billion worldwide, setting box-office records for the biggest worldwide opening weekend for an animated film and for the highest-grossing film based on a video game, and ranking as the 15th-highest-grossing film of all time. Nintendo reached an agreement with Embracer Group in May 2024 to acquire 100% of the shares in Shiver Entertainment, a company that has specialized in porting triple-A games like Hogwarts Legacy and Mortal Kombat 1 to the Switch, making it a wholly owned subsidiary of Nintendo, subject to closing conditions. In October 2024, the company opened the Nintendo Museum on the site of its former Uji Ogura plant, where it had manufactured playing and hanafuda cards. The same month, Nintendo announced Nintendo Music, a mobile application enabling one to listen to soundtracks from Nintendo games. By November 2024, Nintendo gained full ownership of Monolith Soft, a first-party developer behind Xenoblade Chronicles that also provided support for The Legend of Zelda: Tears of the Kingdom. The successor to the Switch, the Nintendo Switch 2, was released on 5 June 2025. It has a larger display and more internal storage than the original Switch, along with updated graphics, controllers, and social features. It supports 1080p resolution and a 120 Hz refresh rate in handheld or tabletop mode, and 4K resolution with a 60 Hz refresh rate when docked. On 10 June, Nintendo reported that the Switch 2 had sold more than 3.5 million units worldwide, becoming the fastest-selling console in history and overtaking the previous record-holder, the PlayStation 2. In September 2025, Nintendo announced that the sequel to The Super Mario Bros. Movie, titled The Super Mario Galaxy Movie, is scheduled to be released on 3 April 2026. On 27 November 2025, Nintendo announced that it would acquire Bandai Namco Studios Singapore through a share transfer with Bandai Namco Studios, starting with an 80% stake on 1 April 2026 and followed by the remaining shares once operations have stabilized. Following this, BNSS would be rebranded as Nintendo Studios Singapore. Products Nintendo's central focus is the research, development, production, and distribution of entertainment products—primarily video game software and hardware and card games. Its main markets are Japan, America, and Europe, and more than 70% of its total sales come from the latter two territories. As of May 2025, Nintendo's game consoles have sold over 860 million units, for which more than 5.9 billion video games have been sold globally. Since the launch of the Color TV-Game in 1977, Nintendo has produced and distributed home, handheld, dedicated, and hybrid consoles. In the 1980s, its first consoles to be successful were the Game & Watch and the Nintendo Entertainment System. In the 1990s, Nintendo launched new generations of home consoles with the Super Nintendo Entertainment System and Nintendo 64 and achieved global success with the Game Boy handheld console. In the 2000s, Nintendo found wide success again, with both the Nintendo DS and Wii. 
Each has a variety of accessories and controllers, such as the NES Zapper, the Game Boy Camera, the Super NES Mouse, the Rumble Pak, the Wii MotionPlus, the Wii U Pro Controller, and the Switch Pro Controller. Nintendo's first electronic games were arcade games. EVR Race (1975) was the company's first electromechanical game, and Donkey Kong (1981) was one of the first platform games in history. Since then, both Nintendo and other development companies have produced and distributed an extensive catalog of video games for Nintendo's consoles. Nintendo's games are sold both in removable media formats, such as optical discs and cartridges, and in online formats distributed via services such as the Nintendo eShop and the Nintendo Network. Corporate structure Nintendo's internal research and development operations are divided into three main divisions: The Nintendo Entertainment Planning & Development division is the primary software development, production, and supervising division at Nintendo, formed in 2015 as a merger of its former Entertainment Analysis & Development and Software Planning & Development divisions. Led by Shinya Takahashi, the division holds the largest concentration of staff at the company, housing more than 800 engineers, producers, directors, coordinators, planners, and designers. The Nintendo Platform Technology Development division is a combination of Nintendo's former Integrated Research & Development (IRD) and System Development (SDD) divisions. Led by Ko Shiota, the division is responsible for designing hardware, developing Nintendo's operating systems, developer environment, and internal network, and maintaining the Nintendo Network. The Nintendo Business Development division was formed following Nintendo's foray into software development for smart devices such as mobile phones and tablets. It is responsible for refining Nintendo's business model for the dedicated video game system business and overseeing development for smart devices. Notable board members include Shigeru Miyamoto, Satoru Shibata, and outside director Chris Meledandri, CEO of Illumination Entertainment; notable executive officers include Yoshiaki Koizumi, deputy general manager of the Entertainment Planning & Development division, and Takashi Tezuka, senior officer of the same division. Headquartered in Kyoto, Japan, since the beginning, Nintendo Co., Ltd. oversees the organization's global operations and manages Japanese operations specifically. The company's two major subsidiaries, Nintendo of America and Nintendo of Europe, manage operations in North America and Europe respectively. Nintendo Co., Ltd. later moved from its original Kyoto location to a new office in Higashiyama-ku, Kyoto; this became the research and development building in 2000, when the head office relocated to its present location in Minami-ku, Kyoto. Nintendo founded its North American subsidiary in 1980 as Nintendo of America (NoA). Hiroshi Yamauchi appointed his son-in-law Minoru Arakawa as president, who in turn hired his own wife, Yamauchi's daughter Yoko, as the first employee. The Arakawa family moved from Vancouver, British Columbia, to select an office in Manhattan, New York, due to its central status in American commerce. As both were from extremely affluent families, their goals were set more by prestige than money. 
The seed capital and product inventory were supplied by the parent corporation in Japan, with a launch goal of entering the existing $8 billion-per-year coin-op arcade video game market, the largest entertainment industry in the US, which had already outclassed movies and television combined. During the couple's arcade research excursions, NoA hired young gamers to work in the poorly maintained warehouse in New Jersey to receive and service game hardware from Japan. In late 1980, NoA contracted the Seattle-based arcade sales and distribution company Far East Video, consisting solely of the experienced arcade salespeople Ron Judy and Al Stone. The two had already built a decent reputation and a distribution network, founded specifically for the independent import and sale of games from Nintendo, because the Japanese company had for years been an under-represented maverick in America. Now direct associates of the new NoA, they told Arakawa they could always clear all Nintendo inventory if Nintendo produced better games. Far East Video took NoA's contract for a fixed per-unit commission on the exclusive American distributorship of Nintendo games, to be settled by their Seattle-based lawyer, Howard Lincoln. Based on favorable test arcade sites in Seattle, Arakawa wagered most of NoA's modest finances on a huge order of 3,000 Radar Scope cabinets. He panicked when the game failed in the fickle market upon its arrival from its four-month boat ride from Japan. Far East Video was already in financial trouble due to declining sales, and Ron Judy borrowed his aunt's life savings of $50,000, while still hoping Nintendo would develop its first Pac-Man-sized hit. Arakawa regretted founding the Nintendo subsidiary, with the distressed Yoko trapped between her arguing husband and father. Amid financial threat, Nintendo of America relocated from Manhattan to the Seattle metro to remove major stressors: the frenetic New York and New Jersey lifestyle and commute, and the extra weeks or months on the shipping route from Japan, as had been suffered in the Radar Scope disaster. With the Seattle harbor being the US's closest to Japan at only nine days by boat, and the region having a lumber production market for arcade cabinets, Arakawa's real estate scouts found a 60,000-square-foot (5,600 m2) warehouse for rent containing three offices—one for Arakawa and one each for Judy and Stone. This warehouse in the Tukwila suburb was owned by Mario Segale, after whom the Mario character would be named, and was initially managed by former Far East Video employee Don James. After one month, James recruited his college friend Howard Phillips as an assistant, who soon took over as warehouse manager. The company remained at fewer than 10 employees for some time, handling sales, marketing, advertising, distribution, and limited manufacturing of arcade cabinets and Game & Watch handheld units, all sourced and shipped from Nintendo. Arakawa was still panicked over NoA's ongoing financial crisis. With the parent company having no new game ideas, he had been repeatedly pleading for Yamauchi to reassign some top talent away from existing Japanese products to develop something for America—especially to redeem the massive dead stock of Radar Scope cabinets. Since all of Nintendo's key engineers and programmers were busy, and with NoA representing only a tiny fraction of the parent's overall business, Yamauchi allowed only the assignment of Gunpei Yokoi's young assistant, Shigeru Miyamoto, who had no background in engineering. 
NoA's staff—except the sole young gamer Howard Phillips—were uniformly revolted at the sight of the freshman developer Miyamoto's debut game, which they had imported in the form of emergency conversion kits for the overstock of Radar Scope cabinets. The kits transformed the cabinets into NoA's massive windfall gain of $280 million from Miyamoto's smash hit Donkey Kong in 1981–1983 alone. They sold 4,000 new arcade units each month in America, making the 24-year-old Phillips "the largest volume shipping manager for the entire Port of Seattle". Arakawa used these profits to buy 27 acres (11 ha) of land in Redmond in July 1982 and to perform the $50 million launch of the Nintendo Entertainment System in 1985 which revitalized the entire video game industry from its devastating 1983 crash. A second warehouse in Redmond was soon secured, and managed by Don James. The company stayed at around 20 employees for some years. On 10 August 1993, Nintendo of America rolled out the Nintendo Gateway System. The organization was reshaped nationwide in the following decades, and those core sales and marketing business functions are now directed by the office in Redwood City, California. The company's distribution centers are Nintendo Atlanta in Atlanta, Georgia, and Nintendo North Bend in North Bend, Washington. As of 2007[update], the 380,000-square-foot (35,000 m2) Nintendo North Bend facility processes more than 20,000 orders a day to Nintendo customers, which include retail stores that sell Nintendo products in addition to consumers who shop Nintendo's website. Nintendo of America's Canadian branch, Nintendo of Canada, is based in Vancouver, British Columbia with a distribution center in Toronto. Nintendo Treehouse is NoA's localization team, composed of around 80 staff who are responsible for translating text from Japanese to English, creating videos and marketing plans, and quality assurance. Nintendo of America announced in October 2021 that it will be closing its offices in Redwood City, California, and Toronto and merging its operations with its Redmond and Vancouver offices. In April 2022, an anonymous quality assurance worker filed a complaint with the National Labor Relations Board, alleging Nintendo of America and contractor Aston Carter had engaged in union-busting activities and surveillance. The employee had been fired for mentioning unionizing efforts in the industry during a company meeting. The companies agreed to a settlement with the employee in October 2022. In March 2024, Nintendo of America restructured its product testing teams, resulting in the elimination of over 100 contractor roles. Some of the affected contractors were given full-time roles. Nintendo's European subsidiary was established in June 1990, based in Frankfurt, Germany. The company handles operations across Europe (excluding Scandinavia, where operations are handled by Bergsala on behalf of NOE), as well as South Africa. Nintendo of Europe's United Kingdom branch (Nintendo UK) handles operations in that country and in Ireland from its headquarters in Windsor, Berkshire. In June 2014, NOE initiated a reduction and consolidation process, yielding a combined 130 layoffs: the closing of its office and warehouse, termination of all employment, in Großostheim; and the consolidation of all of those operations into, and terminating some employment at, its Frankfurt location. As of July 2018, the company employs 850 people. 
In October 2018, Nintendo of Europe announced plans to relocate to a new 160,000-square-foot (15,000 m2) headquarters in Frankfurt, eventually moving into the location in 2020 during the COVID-19 pandemic. In 2019, NOE signed with Tor Gaming Ltd. for official distribution in Israel. Nintendo Australia (NAL) was established in June 1993 and is based in Scoresby, Victoria, a suburb of Melbourne. It handles the publishing, distribution, sales, and marketing of Nintendo products in Australia and New Zealand. Its original headquarters was located in Mulgrave, Victoria. Prior to NAL assuming publishing and distribution of all Nintendo products in Australia in January 1994, distribution was handled in Australia by Mattel Australia and in New Zealand by Video One on behalf of Mattel. The founding General Managers of NAL were Graham Kerry (formerly the Managing Director of Mattel Australia) and Susumu Tanaka (who transferred from Nintendo UK and is currently a Senior Executive Officer at Nintendo's global HQ in Kyoto). Former Managing Directors include Nintendo board member Satoru Shibata and Rose Lappin, who previously worked on Nintendo products for Mattel Australia prior to joining NAL in 1993. Since its establishment, NAL has also published and distributed third-party video games in Australia for publishers such as Virgin Interactive Entertainment, Accolade, Atlus, Sega of Europe, Capcom Europe, Rising Star Games, Marvelous, Bandai Namco Entertainment, Enix & Square Enix, Hudson Soft, Disney Interactive and Tomy, amongst others. Nintendo's South Korean subsidiary was established on 7 July 2006 and is based in Seoul. In March 2016, the subsidiary was heavily downsized in a corporate restructuring after an analysis of shifts in the market, laying off 80% of its employees and leaving only ten people, including former CEO Hiroyuki Fukuda. This did not affect any games scheduled for release in South Korea, and Nintendo continued operations there as usual. Takahiro Miura would later take over as CEO in 2018. In April 2025, the subsidiary gained international attention when its website unintentionally leaked the presence of young Pauline in Donkey Kong Bananza. Nintendo's Singaporean subsidiary was established on 26 September 2025. Takahiro Miura is the supervising branch manager. In November, Nintendo also announced its plans to acquire Bandai Namco Studios Singapore and rename it Nintendo Studios Singapore. Nintendo Phuten was incorporated in Taipei, Taiwan, in 1991 as Phuten Co., Ltd. As Nintendo's Taiwanese subsidiary, it distributed Nintendo's products in Taiwan until its closure in 2014. Its responsibilities were handed over to Nintendo (Hong Kong) Limited until 2025, when Nintendo Taiwan Co., Ltd. was formed in Taipei to handle sales in the region. Nintendo (Hong Kong) Limited was incorporated on 7 April 2005. It marketed the Wii in Hong Kong after Nintendo could not market the console in mainland China under iQue, being unable to circumvent the ban on foreign-made consoles imposed by the Chinese government. It currently handles distribution of Nintendo consoles in Hong Kong. Taiwan was also included under the division from 2014 until 2025. Although most of the research and development (R&D) is being done in Japan, there are some R&D facilities in the United States, Europe, and China that are focused on developing software and hardware technologies used in Nintendo products. 
Although they all are subsidiaries of Nintendo (and therefore first-party), they are often referred to as external resources when being involved in joint development processes with Nintendo's internal developers by the Japanese personnel involved. This can be seen in the Iwata Asks interview series. Nintendo Software Technology (NST) and Nintendo Technology Development (NTD) are located in Redmond, Washington, United States, while Nintendo European Research & Development (NERD) is located in Paris, France, and Nintendo Network Service Database (NSD) is located in Kyoto, Japan. Most external first-party software development is done in Japan, because the only overseas subsidiaries are Retro Studios and Shiver Entertainment in the United States (acquired in 2002 and 2024, respectively) and Next Level Games in Canada (acquired in 2021). Although these studios are all subsidiaries of Nintendo, they are often referred to as external resources when being involved in joint development processes with Nintendo's internal developers by the Nintendo Entertainment Planning & Development (EPD) division. 1-Up Studio and Nintendo Cube are located in Tokyo, Japan, and Monolith Soft has one studio located in Tokyo and another in Kyoto. Nintendo established The Pokémon Company alongside Creatures and Game Freak to manage the Pokémon brand. Similarly, Warpstar, Inc. was formed through a joint investment with HAL Laboratory, which was in charge of the Kirby: Right Back at Ya! animated series as well as the web series It's Kirby Time. Both companies are investments from Nintendo, with Nintendo holding 32% of the shares of The Pokémon Company and 50% of the shares of Warpstar, Inc. Following the success of the Super Mario Bros. movie, Nintendo bought out HAL Laboratory's stake in Warpstar in April 2025, and by August 2025, rebranded the subsidiary as Nintendo Stars to focus on further multimedia initiatives involving Nintendo's IP. Other notable subsidiaries include: Active Boeki is a distribution company based in Kobe that handles the distribution of Nintendo hardware and software in Southeast Asia and the Middle East since the Game & Watch era, under the responsibility of Nintendo Co. Ltd. in Japan. The company works with local resellers, such as Singapore-based Maxsoft handling distribution and sales in Singapore, Malaysia, Indonesia, Thailand and the Philippines. Active Boeki also works with resellers such as UAE-based Active Gulf and Saudi-based Shas Samurai, responsible for distribution and sales in the United Arab Emirates, Oman, Saudi Arabia, Qatar, Bahrain and Kuwait. In 2023, Active Boeki through Shas Samurai has ceased its distributing operations for Saudi Arabia, as AIC Trading received distribution rights for Nintendo in the country, overseen by Nintendo of Europe. Active Boeki through Maxsoft is also no longer the sole exclusive distributor for Nintendo in Southeast Asia after the appointment of new distributors in charge of distribution, sales, promotion and pop-up stores related to Nintendo products domestically in all countries previously covered by Maxsoft except Indonesia, such as Convergent Systems responsible for Singapore and Malaysia, Synnex for Thailand, and VST-ECS for the Philippines. Bergsala, a third-party company based in Sweden, exclusively handles Nintendo operations in the Nordic region. Bergsala's relationship with Nintendo was established in 1981 when the company sought to distribute Game & Watch units to Sweden, which later expanded to the NES console by 1986. 
Bergsala was the only non-Nintendo owned distributor of Nintendo's products until 2019, when Tor Gaming gained distribution rights in Israel. Nintendo has partnered with Tencent to release Nintendo products in China, following the lifting of the country's console ban in 2015. In addition to distributing hardware, Tencent helps with the governmental approval process for video game software. In January 2019, Ynet and IGN Israel reported that negotiations about the official distribution of Nintendo products in the country were ongoing. After two months, IGN Israel announced that Tor Gaming Ltd., a company established earlier in 2019, had gained a distribution agreement with Nintendo of Europe, handling official retailing from the start of March and opening an official online store the next month. Marketing Nintendo of America has engaged in several high-profile marketing campaigns to define and position its brand. One of its earliest and most enduring slogans was "Now you're playing with power!", used first to promote its Nintendo Entertainment System. It modified the slogan to include "SUPER power" for the Super Nintendo Entertainment System, and "PORTABLE power" for the Game Boy. Its 1994 "Play It Loud!" campaign played upon teenage rebellion and fostered an edgy reputation. During the Nintendo 64 era, the slogan was "Get N or get out". During the GameCube era, the "Who Are You?" campaign suggested a link between the games and the players' identities. The company promoted its Nintendo DS handheld with the tagline "Touching is Good". For the Wii, Nintendo used the "Wii would like to play" slogan, promoting the console with people trying games such as Super Mario Galaxy and Super Paper Mario. The Nintendo 3DS used the slogan "Take a look inside". The Wii U used the slogan "How U will play next". The Nintendo Switch uses the slogan "Switch and Play" in North America, and "Play anywhere, anytime, with anyone" elsewhere. During the peak of Nintendo's success in the video game industry in the 1990s, its name was ubiquitously used to refer to any video game console, regardless of the manufacturer. To prevent its trademark from becoming generic, Nintendo pushed the term "game console" and succeeded in preserving its trademark. Nintendo operates or licenses retail stores across the world. In Hong Kong, a third-party franchisee operates several Nintendo Switch-focused retail stores under the name of NSEW. The first store opened in March 2020 in Sham Shui Po. Two additional stores later opened, alongside a temporary pop-up store in the Hong Kong International Airport. Another Nintendo Switch-focused store, Assemble, is located in Wan Chai. The store opened on 14 November 2024 and features a section dedicated to third-party developer and publisher Cygames. In June 2019, Nintendo's official Israeli distributor TorGaming Ltd. opened the second brick-and-mortar Nintendo retail store in the world, entitled Nintendo Israel, at Dizengoff Center in Tel Aviv. The store was Dizengoff Center's second largest launch. On 1 February 2019, Nintendo announced that it would open Nintendo Tokyo as a facility at the then-under-construction Shibuya Parco department store in the fall of that year, its first self-managed store in the country. The store opened with the complex on 22 November 2019. Since Nintendo Tokyo's opening, two additional Nintendo stores have opened in Japan.
Nintendo Osaka opened on 11 November 2022, located on the thirteenth floor of the Daimaru Umeda department store in Kita-ku, as a store-within-a-store. Nintendo Kyoto, located within the Takashimaya Department Store building in Kyoto, opened on 17 October 2023. In May 2012, Shas Samurai, Nintendo's official representative in Saudi Arabia, opened a "Nintendo World Store" at Al Faisaliah Mall in Riyadh. Nintendo opened its first retail store, Nintendo World (now Nintendo New York), on 14 May 2005, at the former location of the Pokémon Center at Rockefeller Center in New York City. Nintendo opened its second US store, Nintendo San Francisco, in the city's Union Square neighborhood on 15 May 2025. The Nintendo of America headquarters in Redmond, Washington has a private store which is open only to employees and invited guests. Additionally, Nintendo launched official pop-up stores in 2021 at various Japanese cities, and later in 2023 in Seoul, Singapore, and Hong Kong. In use since the 1960s, Nintendo's most recognizable logo is the ovoid racetrack shape, especially the red-colored wordmark typically displayed on a white background, primarily used in Western markets from 1985 to 2006. In Japan, a monochromatic version that lacks a colored background is on Nintendo's own Famicom, Super Famicom, Nintendo 64, GameCube, and handheld console packaging and marketing. Since 2006, in conjunction with the launch of the Wii, Nintendo changed its logo to a gray variant that lacks a colored background inside the wordmark, making it transparent. Nintendo's official, corporate logo remains this variation.[failed verification] For consumer products and marketing, a white variant on a red background has been used since 2016, and has been in full effect since the launch of the Nintendo Switch in 2017. Policy Unlike most Japanese companies, Nintendo has generally kept a large cash reserve instead of using the extra funds for investments or stock buybacks and dividends, a policy set in place by Hiroshi Yamauchi. As of September 2025, the company is estimated to have ¥1.5 trillion in cash reserves, amounting to around 120% of its sales. This cash reserve helped Nintendo quickly recover from poor sales of the GameCube and Wii U, as well as providing financial assurance for long-term projects. For many years, Nintendo had a policy of strict content guidelines for video games published on its consoles. Although Nintendo allowed graphic violence in its video games released in Japan, nudity and sexuality were strictly prohibited. Former Nintendo president Hiroshi Yamauchi believed that if the company allowed the licensing of pornographic games, the company's image would be forever tarnished. Nintendo of America went further: games released for Nintendo consoles could not feature nudity, sexuality, profanity (including racism, sexism or slurs), blood, graphic or domestic violence, drugs, political messages, or religious symbols—with the exception of widely unpracticed religions, such as the Greek Pantheon. The Japanese parent company was concerned that it might be viewed as a "Japanese invasion" by forcing Japanese community standards on North American and European children.
Past the strict guidelines, some exceptions have occurred: Bionic Commando (though swastikas were eliminated in the US version), Smash TV and Golgo 13: Top Secret Episode contain human violence, the latter also containing implied sexuality and tobacco use, River City Ransom and Taboo: The Sixth Sense contain nudity, and the latter also contains religious images, as do Castlevania II and III. Nintendo's content policy is responsible for the Genesis version of Mortal Kombat having more than double the unit sales of the Super NES version, largely due to Nintendo forcing its publisher Acclaim to recolor red blood to look like white sweat within the game and to tone down its gorier and more violent graphics. By contrast, Sega allowed blood and gore to remain in the Genesis version (though a code is required to unlock the gore). Nintendo allowed the Super NES version of Mortal Kombat II to ship uncensored the following year with a content warning on the packaging. Video game ratings systems were introduced with the Entertainment Software Rating Board (ESRB) of 1994 and the Pan European Game Information of 2003, and Nintendo discontinued most of its censorship policies in favor of consumers making their own choices. Today changes to the content of games are done primarily by the game's developer or, occasionally, at the request of Nintendo. The only clear-set rule is that ESRB AO-rated games will not be licensed on Nintendo consoles in North America, a practice which is also enforced by Sony and Microsoft, its greatest competitors in the present market. Nintendo has since allowed several mature-content games to be published on its consoles, including Perfect Dark, Conker's Bad Fur Day, Doom, Doom 64, BMX XXX, the Resident Evil series, Killer7, the Mortal Kombat series, Eternal Darkness: Sanity's Requiem, BloodRayne, Geist, Dementium: The Ward, Bayonetta 2, Devil's Third, and Fatal Frame: Maiden of Black Water. Certain games have continued to be modified, however. For example, Konami was forced to remove all references to cigarettes in the 2000 Game Boy Color game Metal Gear Solid (although the previous NES version of Metal Gear, the GameCube game Metal Gear Solid: The Twin Snakes, and the 3DS game Metal Gear Solid 3: Snake Eater 3D, included such references), and maiming and blood were removed from the Nintendo 64 port of Cruis'n USA. Another example is in the Game Boy Advance game Mega Man Zero 3, in which one of the bosses, called Hellbat Schilt in the Japanese and European releases, was renamed Devilbat Schilt in the North American localization. In North American releases of the Mega Man Zero games, enemies and bosses killed with a saber attack do not gush blood as they do in the Japanese versions. However, the release of the Wii was accompanied by several even more controversial games, such as Manhunt 2, No More Heroes, The House of the Dead: Overkill, and MadWorld, the latter three of which were initially published exclusively for the console. Nintendo of America also had guidelines before 1993 that had to be followed by its licensees to make games for the Nintendo Entertainment System, in addition to the above content guidelines. Guidelines were enforced through the 10NES lockout chip. The last rule was circumvented in several ways; for example, Konami, wanting to produce more games for Nintendo's consoles, formed Ultra Games and later Palcom to produce more games as a technically different publisher. 
This disadvantaged smaller or emerging companies, as they could not afford to start more companies. In another side effect, Square Co. (now Square Enix) executives have suggested that the price of publishing games on the Nintendo 64 along with the degree of censorship and control which Nintendo enforced over its games,[citation needed] most notably Final Fantasy VI, were factors in switching its focus towards Sony's PlayStation console. In 1993, a class action suit was taken against Nintendo under allegations that their lockout chip enabled unfair business practices. The case was settled, with the condition that California consumers were entitled to a $3 discount coupon for a game of Nintendo's choice. Nintendo has generally been proactive in ensuring that its intellectual property in both hardware and software is protected. Nintendo's protection of its properties began as early as the arcade release of Donkey Kong which was widely cloned on other platforms, a practice common to the most popular arcade games of the era. Nintendo did seek legal action to try to stop the release of these unauthorized clones but estimated they still lost $100 million in potential sales to these clones. Since then, Nintendo has been proactive in preventing copyright infringement of its games by video game emulators and fan games and other works using the company's intellectual property. The company has also suffered from various data breaches and has sought action against those that have released these leaks. The gold sunburst seal was first used by Nintendo of America, and later by Nintendo of Europe. It is displayed on any game, system, or accessory licensed for use on one of its video game consoles, denoting the game has been properly approved by Nintendo. The seal is also displayed on any Nintendo-licensed merchandise, such as trading cards, game guides, or apparel, albeit with the words "Official Nintendo Licensed Product". In 2008, game designer Sid Meier cited the Seal of Quality as one of the three most important innovations in video game history, as it helped set a standard for game quality that protected consumers from shovelware. In NTSC regions, this seal is an elliptical starburst named the "Official Nintendo Seal". Originally, for NTSC countries, the seal was a large, black and gold circular starburst. The seal read as follows: "This seal is your assurance that NINTENDO has approved and guaranteed the quality of this product." This seal was later altered in 1988: "approved and guaranteed" was changed to "evaluated and approved". In 1989, the seal became gold and white, as it currently appears, with a shortened phrase, "Official Nintendo Seal of Quality". It was changed in 2003 to read "Official Nintendo Seal". The seal currently reads: The official seal is your assurance that this product is licensed or manufactured by Nintendo. Always look for this seal when buying video game systems, accessories, games, and related products. In PAL regions, the seal is a circular starburst named the "Original Nintendo Seal of Quality". Text near the seal in the Australian Wii manual states: This seal is your assurance that Nintendo has reviewed this product and that it has met our standards for excellence in workmanship, reliability, and entertainment value. Always look for this seal when buying games and accessories to ensure complete compatibility with your Nintendo product. 
In 1992, Nintendo teamed with the Starlight Children's Foundation to build Starlight Fun Center mobile entertainment units and install them in hospitals. By the end of 1995, 1,000 Starlight Nintendo Fun Center units were installed. The units combine several forms of multimedia entertainment including gaming, and are a distraction as well as brightening moods and boosting children's morale during hospital stays. Nintendo has consistently been ranked last in Greenpeace's "Guide to Greener Electronics" due to Nintendo's failure to publish information. Similarly, they are ranked last in the Enough Project's "Conflict Minerals Company Rankings" due to Nintendo's refusal to respond to multiple requests for information. Like many other electronics companies, Nintendo offers a recycling program for customers to mail in unused products. Nintendo of America claimed 548 tons of returned products in 2011, 98% of which became reused or recycled. Legacy "Nearly every generation, Nintendo has led a charge of innovation that has fundamentally reshaped the gaming world. These innovations haven't always been well received, but Nintendo's fingerprints are so firmly etched into our industry, that the company is arguably the most important figure in it." It is considered that Hiroshi Yamauchi's strategic decisions, mainly to take Nintendo into the world of electronic games, ensured not only the success of his company but the survival of the industry as a whole, as it "restored public confidence in electronic games after the gloomy collapse of the U.S. market in the early 1980s". The company was already the most successful in Japan by 1991, with its products having "redefined the way we play games" and its business model having prioritized title sales strategies over consoles, unlike what most distributors at the time were doing. Its social responsibility policy and philosophy focused on quality and innovation have already led to Nintendo being classified as a "consumer-centric manufacturer", something that has allowed it to differentiate itself from its direct competitors, Sony and Microsoft. Forbes magazine has since 2013 included Nintendo in its list of the "World's Best Employers", which takes into consideration work environment and staff diversity. Time magazine in turn chose Nintendo in 2018 as one of the "50 Genius Companies" of the year, saying that "resurrection" has become a "habit" of the company and highlighting the success of the Nintendo Switch over the Wii U. Its capital in 2018 exceeded ten billion yen and net sales were over nine billion dollars, mostly in the North American market, making it one of Japan's richest and most valuable companies. Nintendo characters have had a significant impact on contemporary popular culture. Mario has gone from being just a corporate mascot to a "cultural icon", as well as one of the most famous characters in the industry. According to John Taylor of Arcadia Investment Corp. the character "is by far the biggest single property in electronic gaming." Other prominent company characters include Princess Peach, Pikachu, Link, Donkey Kong, Kirby, and Samus Aran. See also Notes References External links 34°58′11″N 135°45′22.3″E / 34.96972°N 135.756194°E / 34.96972; 135.756194 |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Super_NES_CD-ROM] | [TOKENS: 4040] |
Contents Super NES CD-ROM The Super NES CD-ROM[a] (commonly abbreviated as SNES CD) was a proposed video game platform developed in the early 1990s by Nintendo via joint ventures with Sony and Philips, intended to expand the functionality of the cartridge-based Super Nintendo Entertainment System (SNES)[b] by adding support for compact discs (CDs). The collaborations with Sony and Philips resulted in two distinct projects that would support playback of CDs: one was an add-on device for the Super NES developed by Philips, and the other was a dedicated all-in-one unit developed by Sony under the name "PlayStation".[c] Games would also be stored on the medium, using two distinct formats based on CD-ROM for the two collaborations. Both projects ultimately fell through after Nintendo dropped out of the joint ventures in 1991 and 1993 respectively, and both CD-based projects were cancelled with few to no prototypes produced. This turn of events led to Sony developing a console of its own and Philips gaining licenses to some Nintendo properties for a few Nintendo-themed games for the CD-i platform, many of which were unsuccessful and poorly received. Nintendo itself did not transition to optical media until the release of the GameCube in 2001. History Released in 1990, the Super Nintendo Entertainment System (SNES) was Nintendo's entry into the fourth generation of video game consoles, also known as the 16-bit era. It became a major success worldwide, outselling its competitors, the TurboGrafx-16/PC Engine and the Sega Genesis/Mega Drive, becoming the most popular console of that generation. During the 1990s, compact discs (CDs) started to gain traction and popularity as a storage medium for music and video games, positioned as an alternative to the traditional cartridge format that was the norm in the video game industry at the time. Advantages over the cartridge format included greater storage capacity, full-motion video (FMV) playback, and the inclusion of high-quality audio (including audio CD playback). Add-on accessories using CD technology were created to take advantage of this approach; the first was NEC's TurboGrafx-CD/PC Engine CD-ROM² in 1988, followed by Sega's Sega CD/Mega-CD in 1991. In response, Nintendo sought to create its own take on the concept to combat its competitors, and entered negotiations with Sony, which had previously designed the sound chips for the SNES, to create the project. Sony engineer Ken Kutaragi became interested in video game development after observing his daughter play games on Nintendo's Famicom video game console. Without full corporate approval, Kutaragi secretly designed the S-SMP audio chip for Nintendo's upcoming Super NES console. At the time, Sony was uninterested in the video game business, so most of his superiors did not approve of the project (and he was nearly fired for doing so), but Kutaragi received support from Sony executive Norio Ohga, who allowed the project to proceed. Encouraged by the collaboration, and convinced that CD-ROMs (which Sony had co-developed with Philips) would eventually supplant cartridges, Kutaragi proposed a CD-ROM drive for the Super NES. Although Nintendo was initially skeptical, concerned about the slow load times of CD-ROM drives of the time, it permitted Sony to begin development after Kutaragi claimed the drive would be used for multimedia purposes rather than games. Development began in late 1988.
The resulting project was a Sony-branded console called the PlayStation, designed to support both Super NES cartridges and a new CD-based format known as the Super Disc. Contemporaneous plans also reportedly called for the integration of the Super FX coprocessor developed by Argonaut Games for 3D graphics acceleration, which was used in games such as Star Fox. Jez San of Argonaut recalled that Nintendo and Sony initially wanted to add the Super FX chip into their new console, which would have allowed for rudimentary 3D graphics out of the box, and said that the chip was discussed as part of early technical proposals during negotiations with Sony and Nintendo. Under Sony's proposed agreement, the company would retain control over the Super Disc format and its software licensing, as well as reap the exclusive benefits from music and movie content on the platform—areas where Sony was aggressively expanding. Nintendo president Hiroshi Yamauchi found the terms unacceptable. He was already wary of Sony, which had demanded that game developers use its expensive, proprietary audio tools for the S-SMP audio chip. He was also concerned by Sony's growing influence across music, film, and software. Yamauchi began to suspect that Nintendo was being used to advance Sony's ambitions of launching its own console. He soon began seeking an alternative partner. Turning to one of Sony's main rivals, Philips, Yamauchi dispatched Nintendo of America president Minoru Arakawa and executive Howard Lincoln to the Netherlands to negotiate a more favorable deal. As chronicled by David Sheff in his book Game Over, "[The Philips deal] was meant to do two things at once: give Nintendo back its stranglehold on software and gracefully f--k Sony."[d] Unbeknownst to Sony, Nintendo's intent to go with Philips for the CD-ROM add-on was publicly announced two days before the Consumer Electronics Show in a May 1991 Seattle Times news report. At the Consumer Electronics Show in June 1991, Sony publicly unveiled its hybrid SNES-compatible console, the PlayStation, which supported both cartridges and CDs. The next day, Nintendo revealed its partnership with Philips at the show, which came as a surprise to the audience and is now referred to by many journalists as "the greatest ever betrayal" in the industry. Despite the events at CES 1991, negotiations between Nintendo and Sony continued, and during this period, two to three hundred PlayStation prototypes were produced, and software development was underway. In early 1992, the companies reached a deal allowing Sony to produce SNES-compatible hardware, while Nintendo retained control and profit over the games. However, the strained relationship between the two firms had already taken its toll. Although Sony executives still believed that partnering with the more experienced Nintendo was the safer path, Kutaragi ultimately persuaded the company to abandon the Super NES CD-ROM and instead pursue development of a standalone console for the next generation of video games, which would become the PlayStation in 1994. This new console dropped compatibility with the SNES and contained more powerful hardware specifications than any other console available at the time. To refocus its efforts on the new console, Sony cut all ties to Nintendo in May 1992.
Meanwhile, the partnership between Nintendo and Philips led to the development of an add-on CD-ROM peripheral for the Super NES, featuring additional hardware such as a 32-bit coprocessor and a new CD format based on CD-ROM XA technology known as the Nintendo Disc (ND). However, before any prototypes could be produced, Nintendo reportedly and quietly canceled the project as late as September 1993, effectively ending development of all CD-based Super NES hardware. Proposed devices The PlayStation[e] was a proposed standalone console co-produced by Nintendo and Sony that used a proprietary CD-ROM format designed and solely licensed by Sony, known as the Super Disc, while retaining compatibility with Super NES Game Paks via an included cartridge slot. Initial plans for the unit called for the integration of the Super FX coprocessor chip to allow for support of rudimentary 3D polygonal graphics out of the box; however, this was not present in any of the prototypes that were produced. At least 200 to 300 units of the SNES-based PlayStation were produced until they were scrapped in favor of the next-generation PlayStation project. All of these units bear the model number SFX-100. As of 2025[update], there have been two known examples of these units in existence. Photos of the prototype resurfaced in the 2000s and were subsequently shared online, as well as being featured in an April 2009 Edge article about the original PlayStation's history, showing what the unit would have looked like. Around July 2015, one of the original Sony PlayStation prototypes was reportedly found; this particular unit had been abandoned by former Sony Computer Entertainment CEO Ólafur Jóhann Ólafsson during his time at Advanta. A former Advanta worker, Terry Diebold, acquired the device as part of a lot during Advanta's 2009 bankruptcy auction. As shown in Benjamin Heckendorn's tear-down video of the unit in 2016, the prototype featured two Super NES controller ports, a cartridge slot, a tray-loading dual-speed CD-ROM drive, RCA composite jacks, S-Video, RFU DC OUT (similar to the PlayStation SCPH-1001), a proprietary multi-out AV output port (the same one featured on the Super NES, Nintendo 64, and GameCube), a headphone jack on the front, a serial port labelled "NEXT" (probably for debugging), and one expansion port under the unit. The system was later confirmed to be operational and plays Super Famicom cartridges as well as its included test cartridge, although the audio output and CD drive were non-functional. The unit was also missing its original power supply, which Diebold likely never received when he acquired it in the Advanta bankruptcy auction, so the system could not be powered on; a third-party power supply was used to remedy this issue for the time being. It came with a Sony/PlayStation-branded version of the standard Super Famicom controller (model number SHVC-005). Some groups have attempted to develop homebrew software for the console, such as Super Boss Gaiden, as there were no known games that used the CD drive. In March 2016, retro-gaming website RetroCollect reported that it (and influential members of online emulation communities) had received, from an anonymous source, a functional disc boot ROM for the SNES-based PlayStation. Diebold gave the unit to hardware hacker Benjamin Heckendorn in 2016 to examine its contents.
Heckendorn posted a tear-down video of the system that same year, in which he published some technical specifications of the prototype and compared it to the two CD-based add-ons released for the TurboGrafx-16 and Sega Genesis. He said that the system would probably have been as powerful as a standard Super NES, but not as powerful as the Sega CD. Heckendorn later identified faults in several on-board components, which he replaced in 2017, indirectly fixing the audio and CD drive issues. To fully resolve the missing power supply issue, Heckendorn created a custom power supply for the unit based on the original PlayStation's and modified the unit to use a Sony Walkman power connector matching the one on the custom supply, ensuring the system could be powered on without its original power supply. Heckendorn then showed Super Famicom games (and SNES games via an adapter) running on the system, as well as audio CD playback, since there were no known game CDs, and affirmed that homebrew games worked. This prototype was auctioned by Diebold in February 2020, with an initial price of US$15,000, but the auction quickly exceeded $350,000 within two days. It was sold for $360,000 to Greg McLemore, an entrepreneur and founder of Pets.com, who has a large collection of other video game hardware and plans to establish a permanent museum for this type of hardware. In March 2025, it was reported that a second prototype unit was in Kutaragi's possession, which he had kept inside his closet for storage. This unit is identical to the first known prototype unit discovered nearly ten years prior, but in much better physical condition. The Super NES CD-ROM System[f] was a proposed CD-ROM add-on for the Super NES co-produced by Nintendo and Philips that could accept CDs while also providing additional hardware functionality to expand upon the capabilities of the Super NES. It was developed as a result of a partnership between the two companies that occurred alongside the ongoing development of Sony's standalone SNES-based PlayStation console and the Super Disc CD-ROM format. Like most CD-based add-ons, it could play CD-based games as well as audio CDs via its own built-in CD drive. It was designed to be used only in conjunction with a Super NES console, attaching to the expansion port on the bottom of the main system. Unlike most CD-ROM based add-ons (and virtually all optical disc-based game consoles since), it did not use a tray-loading or top-loading drive and instead used a cartridge-based caddy-loading drive that accepted discs placed in enclosed caddy cases. This was designed to protect the discs from damage, and was similar to early CD-ROM drives used in contemporary computers, such as certain pre-1994 Macintosh computers with built-in CD drives. The add-on's CD drive would operate at both single (1x) and double (2x) speeds, with the faster speed (2x) being primarily used for CD-based games while the slower speed (1x) was presumably only used for audio CDs. CD-based games for the add-on would use a new CD-ROM format known as the Nintendo Disc (ND), which was developed separately from Sony's Super Disc format and was based on CD-ROM XA; the ND format games would also be compatible with CD-i-based hardware.
Because Nintendo was convinced that using CD-ROM technology with a 16-bit processor would not provide consumers with significantly enhanced and unique games, it decided to incorporate a new 32-bit RISC processor into the add-on, which was reported by some analysts to be an NEC V810 clocked at 21.47727 MHz. This new 32-bit CPU, known as the SCCP, was to be included inside a dedicated system cartridge containing the extra hardware dedicated to the add-on, such as additional RAM, ROM, and an additional coprocessor called "HANDS" (Hyper Advanced Nintendo Data Transfer System), a custom chip based around a single 65C02 8-bit processor clocked at 4.295 MHz. HANDS primarily acts as a decoder for the add-on's CD-ROM drive, but also enhances the SNES's sound capabilities with up to four channels of audio, complementing the add-on's CD audio as well as the Super NES's eight-channel S-SMP audio system. The system cartridge would be inserted into the console's cartridge slot and then interfaced via a cord attaching the system cartridge to the CD-ROM unit to supply power and data transfer, in a setup that mirrors that of the Famicom Disk System for the preceding NES (Famicom). Like most CD-ROM add-ons, it would also require its own power supply, as the Super NES cannot supply power to the CD-ROM unit by itself; an AC adapter for the CD-ROM unit would be included to supply power to the add-on. To combat piracy, the add-on would have included a number of copy-protection measures to prevent the use of illicit copies and burned backups of ND format games. The technical specifications of the Super NES CD-ROM System add-on were reported as early as 1992 by Electronic Gaming Monthly (EGM), which published the specs in its March 1993 issue; these were echoed in an issue of Electronic Games published in April 1993. The 1993 EGM and EG issues also showed concept art for the proposed add-on unit, with the EGM issue showing the Super Famicom design and the EG issue showing the North American Super NES design. Before a single prototype could be made, however, Nintendo quietly cancelled the project a few years into the concept phase, which was reported as late as the summer of 1993. The following table is based on Benjamin Heckendorn's specs comparison of the first known prototype unit of Sony's jointly produced SNES-based PlayStation console, shown in July 2016. The specs of the proposed Nintendo and Philips developed Super NES CD-ROM System add-on published by Electronic Gaming Monthly and Electronic Games in 1993 are also included in the table. Legacy After the original contract with Sony failed, Nintendo continued its partnership with Philips. This contract granted Philips the right to feature Nintendo's characters in a few games for its CD-i multimedia device, but never resulted in a CD-ROM add-on for the Super NES after Nintendo's silent cancellation of the project in late 1993. The Nintendo-themed CD-i games were very poorly received, and the CD-i is considered a commercial failure. These games later found their way into early internet culture as cult classics; the hand-drawn cutscenes of certain Nintendo-themed CD-i games in particular were used in various parodies and internet memes in the 2000s, including those published on video sharing sites such as YouTube.
After Nintendo left the collaboration in 1991, Sony continued to work on the project on its own, cutting ties with Nintendo in 1992, and reworked the project into a standalone console that exclusively used CDs instead of cartridges and had more powerful hardware than any other console available at the time. It was around this time that Sony entered into a brief partnership with Sega under an agreement that both companies would share all costs and risks for the new CD-ROM drive (and ultimately the next-generation console). Sega would cancel the partnership, however, claiming that Sony knew little of the industry at the time, and resumed development on what would eventually become the Sega Saturn. Kutaragi, however, was emboldened by his experiences working with both Nintendo and Sega, and after leaving both partnerships Sony resumed development of its own console for the next generation. The main game in development for the SNES CD platform launch was Square's Secret of Mana, whose planned content was cut down to a size suitable for cartridge and released on that medium instead. None of the additional hardware used in the Nintendo and Philips Super NES CD-ROM add-on project ever came to fruition; however, the CPU of the proposed add-on, the NEC V810, did eventually make its way into at least two other video game products: NEC and Hudson Soft's PC-FX game console released exclusively in Japan in December 1994, as well as Nintendo's own Virtual Boy 3D stereoscopic game console released in July 1995 in Japan and August 1995 in North America. Sony released the PlayStation in December 1994 in Japan and September 1995 in North America and Europe, and it soon became a major success worldwide. This next-generation CD-based console successfully competed against other CD-based consoles such as the Sega Saturn, the 3DO, and PC-FX, as well as Nintendo's cartridge-based Nintendo 64, making it a console leader. Sony sold three times as many PlayStation consoles as the Nintendo 64 and the Sega Saturn in the mid-to-late 1990s, establishing Sony as a major player in the video game industry. The broken partnership with Sony has often been cited as a mistake by Nintendo, effectively creating a formidable rival in the video game market as a consequence of Sony's and Kutaragi's shrewd determination to break into the market. Journalists have argued that if Nintendo had never broken the deal, its position might have been further undermined by Sony. Nintendo, still convinced of the faster load times and stronger anti-piracy measures of the cartridge format, did not produce an optical disc-based console until the release of the GameCube in 2001. See also Notes References External links Media related to Nintendo Playstation prototype at Wikimedia Commons |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/BLISS] | [TOKENS: 716] |
Contents BLISS BLISS is a system programming language developed at Carnegie Mellon University (CMU) by W. A. Wulf, D. B. Russell, and A. N. Habermann around 1970. It was perhaps the best known system language until C debuted a few years later. Since then, C became popular and common, and BLISS faded into obscurity. When C was in its infancy, a few projects within Bell Labs debated the merits of BLISS vs. C.[citation needed] BLISS is a typeless block-structured programming language based on expressions rather than statements, and includes constructs for exception handling, coroutines, and macros. It does not include a goto statement. The name is variously said to be short for Basic Language for Implementation of System Software or System Software Implementation Language, Backwards. However, in his 2015 oral history for the Babbage Institute's Computer Security History Project, Wulf claimed that the acronym was originally based on the name "Bill's Language for Implementing System Software." The original Carnegie Mellon compiler was notable for its extensive use of optimizations, and formed the basis of the classic book The Design of an Optimizing Compiler. Digital Equipment Corporation (DEC) developed and maintained BLISS compilers for the PDP-10, PDP-11, VAX, DEC PRISM, MIPS, DEC Alpha, and Intel IA-32. The language did not become popular among customers and few had the compiler, but DEC used it heavily in-house into the 1980s; most of the utility programs for the OpenVMS operating system were written in BLISS-32. The DEC BLISS compiler has been ported to the IA-64 and x86-64 architectures as part of the ports of OpenVMS to these platforms. The x86-64 BLISS compiler uses LLVM as its backend code generator, replacing the proprietary GEM backend used for Alpha and IA-64. Language description [excessive quote] BLISS has many of the features of other modern high-level languages. It has block structure, an automatic stack, and mechanisms for defining and calling recursive routines ... provides a variety of predefined data structures and ... facilities for testing and iteration ... On the other hand, BLISS omits certain features of other high-level languages. It does not have built-in facilities for input/output, because a system-software project usually develops its own input/output or builds on basic monitor I/O or screen management services ... it permits access to machine-specific features, because system software often requires this. BLISS has characteristics that are unusual among high-level languages. A name ... is uniformly interpreted as the address of that segment rather than the value of the segment ... Also, BLISS is an "expression language" rather than a "statement language". This means that every construct of the language that is not a declaration is an expression. Expressions produce a value as well as possibly causing an action such as modification of storage, transfer of control, or execution of a program loop. For example, the counterpart of an assignment "statement" in BLISS is, strictly speaking, an expression that itself has a value. The value of an expression can be either used or discarded in BLISS ... Finally, BLISS includes a macro facility that provides a level of capability usually found only in macro-assemblers. — Bliss Language Manual, Digital Equipment Corporation (1987) The BLISS language has the following characteristics: Source example The following example is taken verbatim from the Bliss Language Manual: Versions Notes References External links |
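The manual's verbatim listing is not reproduced in this extract. As a minimal illustrative sketch only (not the manual's example), assuming the BLISS-32 syntax described in the quoted passage, the following module shows the two characteristics singled out there: a bare name denotes an address while the dot operator fetches its value, and constructs such as assignment and IF are themselves expressions that yield values.

    MODULE example (MAIN = demo) =
    BEGIN
    ROUTINE demo =
        BEGIN
        ! Illustrative sketch only -- not taken from the Bliss Language Manual
        LOCAL x, y;            ! x and y name storage; a bare name is an address
        x = 5;                 ! assignment is an expression whose value is 5
        y = (IF .x GTR 3       ! .x fetches the value stored at x
             THEN .x + 1
             ELSE 0);          ! IF is an expression, so its value can be assigned
        .y                     ! the routine's value is its last expression, .y
        END;
    END
    ELUDOM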
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-20] | [TOKENS: 1856] |
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media and technology that is a wholly owned subsidiary of American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As Chief Engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024[update], Musk was diverting to X and xAI a large number of Nvidia chips that had been ordered by Tesla, Inc. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
On November 26, 2025, Elon Musk announced his plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. The Southern Environmental Law Center has stated the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. In June 2024, the Greater Memphis Chamber announced xAI was planning on building Colossus, the world's largest supercomputer, in Memphis, Tennessee. After a 122-day construction period, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. xAI has continually expanded its infrastructure, with the purchase of a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion is driven by xAI's push to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring reorganised xAI into four primary development teams: one for the Grok app and others for its other features such as Grok Imagine, while Grokipedia, X, and API features would fall under more minor teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot is out of beta, it will only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an applications programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a websearch function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300/mo. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Arabs] | [TOKENS: 20464] |
Contents Arabs Arabs (Arabic: عَرَب)[d] are an ethnic group[e] mainly inhabiting the Arab world in West Asia and North Africa. A significant Arab diaspora is present in various parts of the world. Before the spread of the Arabic language in the wake of the Arab conquests, "Arab" largely referred to the Semitic inhabitants—both settled and nomadic—of the Arabian Peninsula and the Syrian Desert. In modern usage, it includes people from across the Greater Middle East that share Arabic as a native language. Arabs have been in the Fertile Crescent for thousands of years. In the 9th century BCE, the Assyrians made written references to Arabs as inhabitants of the Levant, Mesopotamia, and Arabia. Throughout the Ancient Near East, Arabs established influential civilizations starting from 3000 BCE onwards, such as Dilmun, Gerrha, and Magan, playing a vital role in trade between Mesopotamia and the Mediterranean. Other prominent tribes include Midian,[clarification needed] ʿĀd, and Thamud, mentioned in the Bible and Quran. Later, in 900 BCE, the Qedarites enjoyed close relations with the nearby Canaanite and Aramaean states, and their territory extended from Lower Egypt to the Southern Levant. From 1200 BCE to 110 BCE, powerful kingdoms such as Saba, Lihyan, Minaean, Qataban, Hadhramaut, Awsan, and Homerite emerged in Arabia. According to the Abrahamic tradition, Arabs are descendants of Abraham through his son Ishmael. During classical antiquity, the Nabataeans established their kingdom with Petra as the capital in 300 BCE. By 271 CE, the Palmyrene Empire, with its capital at Palmyra and led by Queen Zenobia, encompassed Syria Palaestina, Arabia Petraea, Egypt, and large parts of Anatolia. The Arab or Aramean Itureans inhabited Lebanon, Syria, and northern Palestine (Galilee) during the Hellenistic and Roman periods. Osroene and Hatra were Arab-ruled kingdoms in Upper Mesopotamia around 200 CE. In 164 CE, the Sasanians called part of upper Mesopotamia "Arbayistan", meaning "land of the Arabs," having conquered the land from the previously Jewish Adiabene. The probably Arab Emesenes ruled Emesa (Homs), Syria, by 46 BCE. During late antiquity, the Tanukhids, Salihids, Lakhmids, Kinda, and Ghassanids were dominant Arab tribes in the Levant, Mesopotamia, and Arabia; they predominantly embraced Christianity. During the Middle Ages, Islam fostered a vast Arab union, leading to significant Arab migrations to the Maghreb, the Levant, and neighbouring territories under the rule of Arab empires such as the Rashidun, Umayyad, Abbasid, and Fatimid, ultimately leading to the decline of the Byzantine and Sasanian empires. At its peak, Arab territories stretched from southern France to western China, forming one of history's largest empires. The Great Arab Revolt in the early 20th century aided in dismantling the Ottoman Empire, ultimately leading to the formation of the Arab League on 22 March 1945, with its Charter endorsing the principle of a "unified Arab homeland". Arabs from Morocco to Iraq share a common bond based on ethnicity, language, culture, history, identity, ancestry, nationalism, geography, unity, and politics, which give the region a distinct identity and distinguish it from other parts of the Muslim world. They also have their own customs, literature, music, dance, media, food, clothing, society, sports, architecture, art, and mythology.
Arabs have significantly influenced and contributed to human progress in many fields, including science, technology, philosophy, ethics, literature, politics, business, art, music, comedy, theatre, cinema, architecture, food, medicine, and religion. Before Islam, most Arabs followed a polytheistic Semitic religion, while some tribes adopted Judaism or Christianity and a few individuals, known as the hanifs, followed a form of monotheism. Currently, around 93% of Arabs are Muslims, while the rest are mainly Arab Christians, as well as Arab groups of Druze and Baháʼís. Etymology The earliest documented use of the word Arab in reference to a people appears in the Kurkh Monoliths, an Akkadian-language record of the Assyrian conquest of Aram (9th century BCE). The Monoliths used the term to refer to Bedouins of the Arabian Peninsula under King Gindibu, who fought as part of a coalition opposed to Assyria. The related word ʾaʿrāb is used to refer to Bedouins today, in contrast to ʿArab which refers to Arabs in general. Both terms are mentioned around 40 times in pre-Islamic Sabaean inscriptions. The term ʿarab ('Arab') occurs also in the titles of the Himyarite kings from the time of 'Abu Karab Asad until MadiKarib Ya'fur. According to Sabaean grammar, the term ʾaʿrāb is derived from the term ʿarab. The term is also mentioned in Quranic verses, referring to people who were living in Madina, and it might be a south Arabian loanword into Quranic language. The oldest surviving indication of an Arab national identity is an inscription made in an archaic form of Arabic in 328 CE using the Nabataean alphabet, which refers to Imru' al-Qays ibn 'Amr as 'King of all the Arabs'. Herodotus refers to the Arabs in the Sinai, southern Palestine, and the frankincense region (Southern Arabia). Other Ancient-Greek historians like Agatharchides, Diodorus Siculus and Strabo mention Arabs living in Mesopotamia (along the Euphrates), in Egypt (the Sinai and the Red Sea), southern Jordan (the Nabataeans), the Syrian steppe and in eastern Arabia (the people of Gerrha). Inscriptions dating to the 6th century BCE in Yemen include the term 'Arab'. The most popular Arab account holds that the word Arab came from an eponymous father named Ya'rub, who was supposedly the first to speak Arabic. Abu Muhammad al-Hasan al-Hamdani had another view; he stated that Arabs were called gharab ('westerners') by Mesopotamians because Bedouins originally resided to the west of Mesopotamia; the term was then corrupted into Arab.[citation needed] Yet another view is held by al-Masudi that the word Arab was initially applied to the Ishmaelites of the Arabah valley. In Biblical etymology, Arab (Hebrew: arvi) comes from the desert origin of the Bedouins it originally described (arava means 'wilderness').[citation needed] The root ʿ-r-b has several additional meanings in Semitic languages—including 'west, sunset', 'desert', 'mingle', 'mixed', 'merchant' and 'raven'—all of which are "comprehensible", with varying degrees of relevance to the emergence of the name. It is also possible that some forms were metathetical from ʿ-B-R, 'moving around' (Arabic: ʿ-B-R, 'traverse') and hence, it is alleged, 'nomadic'. Origins Arabic is a Semitic language that belongs to the Afroasiatic language family. The Arabian Peninsula has long been accepted by the majority of scholars as the original Urheimat (linguistic homeland) of the Semitic languages, with some scholars investigating whether its origins lie in the Levant.
The ancient Semitic-speaking peoples lived in the ancient Near East, including the Levant, Mesopotamia, and the Arabian Peninsula, from the 3rd millennium BCE to the end of antiquity. Proto-Semitic likely reached the Arabian Peninsula by the 4th millennium BCE, and its daughter languages spread outward from there, while Old Arabic began to differentiate from Central Semitic by the start of the 1st millennium BCE. Central Semitic is a branch of the Semitic language family that includes Arabic, Aramaic, the Canaanite languages (Ammonite, Hebrew, Moabite, Philistine, Phoenician, etc.) and others. The origins of Proto-Semitic may lie in the Arabian Peninsula, with the language spreading from there to other regions. This theory proposes that Semitic peoples reached Mesopotamia and other areas from the deserts to the west, such as the Akkadians who entered Mesopotamia around the late 4th millennium BCE. The origins of Semitic peoples are thought to include various regions, including Mesopotamia, the Levant, the Arabian Peninsula, and North Africa. Some hold that Semitic may have originated in the Levant around 3800 BCE and subsequently spread to the Horn of Africa around 800 BCE from Arabia, as well as to North Africa. According to Arab, Islamic, and Jewish traditions, Ishmael, the son of Abraham and Hagar, was the "father of the Arabs". Ishmael was considered the ancestor of the Islamic prophet Muhammad, the founder of Islam. The tribes of Central West Arabia called themselves the "people of Abraham and the offspring of Ishmael." Ibn Khaldun, an Arab scholar of the 14th century CE (8th century AH), described the Arabs as having Ishmaelite origins. The Quran mentions that Ibrahim (Abraham) and his wife Hajar (Hagar) bore a prophetic child named Ishmael, who was favored by God above other people. Ibrahim and Ishmael built the Kaaba in Mecca, which was originally constructed by Adam. According to the Samaritan book Asaṭīr: "And after the death of Abraham, Ishmael reigned twenty-seven years; And all the children of Nebaot ruled for one year in the lifetime of Ishmael; And for thirty years after his death from the river of Egypt to the river Euphrates; and they built Mecca." The Targum Onkelos, annotating Genesis 25:16, describes the extent of their settlements: the Ishmaelites lived from Hindekaia (India) to Chalutsa (possibly in Arabia), by the side of Mizraim (Egypt), and from the area around Athur (Assyria) up towards the north. This description suggests that the Ishmaelites were a widely dispersed group with a presence across a significant portion of the ancient Near East. History The nomads of Arabia had been spreading through the desert fringes of the Fertile Crescent since at least 3000 BCE, but the first known reference to the Arabs as a distinct group is from an Assyrian scribe recording the Battle of Qarqar in 853 BCE. The history of the Arabs during the pre-Islamic period covers various regions, such as Arabia, the Levant, Mesopotamia, and Egypt. The Arabs were mentioned by their neighbors, for example in Assyrian and Babylonian royal inscriptions from the 9th to the 6th centuries BCE. There are also records from Sargon's reign that mention sellers of iron to people called Arabs in Ḫuzaza in Babylon, leading Sargon to prohibit such trade out of fear that the Arabs might use the resource to manufacture weapons against the Assyrian army. The history of the Arabs in relation to the Bible shows that they were a significant part of the region and played a role in the lives of the Israelites.
One study asserts that the Arab nation is an ancient and significant entity; however, it highlights that the Arabs lacked a collective awareness of their unity. They did not inscribe their identity as Arabs or assert exclusive ownership over specific territories. Magan, Midian, and ʿĀd are all ancient tribes or civilizations that are mentioned in Arabic literature and have roots in Arabia. Magan (Arabic: مِجَانُ, Majan) was known for its production of copper and other metals; the region was an important trading center in ancient times and is mentioned in the Qur'an as a place where Musa (Moses) traveled during his lifetime. Midian (Arabic: مَدْيَن, Madyan), on the other hand, was a region located in the northwestern part of Arabia; the people of Midian are mentioned in the Qur'an as having worshiped idols and having been punished by God for their disobedience. Moses also lived in Midian for a time, where he married and worked as a shepherd. ʿĀd (Arabic: عَادَ, ʿĀd), as mentioned earlier, was an ancient tribe that lived in southern Arabia; the tribe was known for its wealth, power, and advanced technology, but it was ultimately destroyed by a powerful windstorm as punishment for its disobedience to God. ʿĀd is regarded as one of the original Arab tribes. The historian Herodotus provided extensive information about Arabia, describing the spices, terrain, folklore, trade, clothing, and weapons of the Arabs. In his third book, he mentioned the Arabs as a force to be reckoned with in the north of the Arabian Peninsula just before Cambyses' campaign against Egypt. Other Greek and Latin authors who wrote about Arabia include Theophrastus, Strabo, Diodorus Siculus, and Pliny the Elder. The Jewish historian Flavius Josephus wrote about the Arabs and their king, mentioning their relationship with Cleopatra, the queen of Egypt. The tribute paid by the Arab king to Cleopatra was collected by Herod, the king of the Jews, but the Arab king later became slow in his payments and refused to pay without further deductions. Geshem the Arab was an Arab man who opposed Nehemiah in the Hebrew Bible (Neh. 2:19, 6:1). He was likely the chief of the Arab tribe "Gushamu" and may have been a powerful ruler with influence stretching from northern Arabia to Judah. The Arabs and the Samaritans made efforts to hinder Nehemiah's rebuilding of the walls of Jerusalem. "Saracens" was a term used in the early centuries, in both Greek and Latin writings, to refer to the Arabs who lived in and near what was designated by the Romans as Arabia Petraea (the Levant) and Arabia Deserta (Arabia). The Christians of Iberia used the term Moor to describe all the Arabs and Muslims of that time. Arabs of Medina referred to the nomadic tribes of the deserts as the A'raab, and considered themselves sedentary, but were aware of their close racial bonds. Hagarenes is a term widely used by early Syriac, Greek, and Armenian sources to describe the early Arab conquerors of Mesopotamia, Syria and Egypt; it refers to the descendants of Hagar, who bore a son named Ishmael to Abraham in the Old Testament. In the Bible, the Hagarenes are referred to as "Ishmaelites" or "Arabs." The Arab conquests of the 7th century were a sudden and dramatic expansion led by Arab armies, which quickly conquered much of the Middle East, North Africa, and Spain. It was a significant moment for Islam, which saw itself as the successor of Judaism and Christianity.
Limited local historical coverage of these civilizations means that archaeological evidence, foreign accounts and Arab oral traditions are largely relied on to reconstruct this period. Prominent civilizations at the time included Dilmun, an important trading centre which at the height of its power controlled the Arabian Gulf trading routes. The Sumerians regarded Dilmun as holy land. Dilmun is regarded as one of the oldest ancient civilizations in the Middle East; it arose around the 4th millennium BCE and lasted to 538 BCE. Gerrha was an ancient city of Eastern Arabia, on the west side of the Gulf; it was the center of an Arab kingdom from approximately 650 BCE to circa 300 CE. Thamud arose around the 1st millennium BCE and lasted to about 300 CE. From the beginning of the first millennium BCE, Proto-Arabic, or Ancient North Arabian, texts give a clearer picture of the Arabs' emergence. The earliest are written in variants of the epigraphic South Arabian musnad script, including the 8th-century BCE Hasaean inscriptions of eastern Saudi Arabia and the Thamudic texts found throughout the Arabian Peninsula and Sinai. The Qedarites were a largely nomadic ancient Arab tribal confederation centred in the Wādī Sirḥān in the Syrian Desert. They were known for their nomadic lifestyle and for their role in the caravan trade that linked the Arabian Peninsula with the Mediterranean world. The Qedarites gradually expanded their territory over the course of the 8th and 7th centuries BCE, and by the 6th century BCE, they had consolidated into a kingdom that covered a large area in northern Arabia, southern Palestine, and the Sinai Peninsula. The Qedarites were influential in the ancient Near East, and their kingdom played a significant role in the political and economic affairs of the region for several centuries. Sheba (Arabic: سَبَأٌ Saba) is a kingdom mentioned in the Hebrew Bible (Old Testament) and the Quran, though Sabaean was a South Arabian language and not an Arabic one. Sheba features in Jewish, Muslim, and Christian traditions; its lineage goes back to Qahtan, son of Hud, one of the ancestors of the Arabs. Sheba was mentioned in Assyrian inscriptions and in the writings of Greek and Roman writers. One of the ancient written references that also spoke of Sheba is the Old Testament, which stated that the people of Sheba supplied Syria and Egypt with incense, especially frankincense, and exported gold and precious stones to them. Sabaeans are mentioned several times in the Hebrew Bible. In the Quran, they are described as either Sabaʾ (سَبَأ, not to be confused with Ṣābiʾ, صَابِئ), or as Qawm Tubbaʿ (Arabic: قَوْم تُبَّع, lit. 'People of Tubbaʿ'). They were known for their prosperous trade and agricultural economy, which was based on the cultivation of frankincense and myrrh. These highly valued aromatic resins were exported to Egypt, Greece, and Rome, making the Sabaeans wealthy and powerful; they also traded in spices, textiles, and other luxury goods. The Maʾrib Dam was one of the greatest engineering achievements of the ancient world, and it provided water for the city of Maʾrib and the surrounding agricultural lands. Lihyan, also called Dadān or Dedan, was a powerful and highly organized ancient Arab kingdom that played a vital cultural and economic role in the north-western region of the Arabian Peninsula and used the Dadanitic language.
The Lihyanites were known for their advanced organization and governance, and they played a significant role in the cultural and economic life of the region. The kingdom was centered around the city of Dedan (modern-day Al Ula), and it controlled a large territory that extended from Yathrib in the south to parts of the Levant in the north. Arab genealogies consider the Banu Lihyan to be Ishmaelites. The Kingdom of Ma'in was an ancient Arab kingdom with a hereditary monarchy system and a focus on agriculture and trade. Proposed dates range from the 15th century BCE to the 1st century CE. Its history has been recorded through inscriptions and classical Greek and Roman books, although the exact start and end dates of the kingdom are still debated. The Ma'in people had a local governance system with councils called "Mazood," and each city had its own temple that housed one or more gods. They also adopted the Phoenician alphabet and used it to write their language. The kingdom eventually fell to the Arab Sabaean people. Qataban was an ancient kingdom located in South Arabia, which existed from the early 1st millennium BCE until the late 1st or 2nd century CE. It developed into a centralized state in the 6th century BCE, with two co-kings ruling. Qataban expanded its territory, including the conquest of Ma'in and successful campaigns against the Sabaeans. It challenged the supremacy of the Sabaeans in the region and waged a successful war against Hadramawt in the 3rd century BCE. Qataban's power declined in the following centuries, leading to its annexation by Hadramawt and Ḥimyar in the 1st century CE. The Kingdom of Hadhramaut was known for its rich cultural heritage, as well as its strategic location along important trade routes that connected the Middle East, South Asia, and East Africa. The kingdom was established around the 3rd century BCE, and it reached its peak during the 2nd century CE, when it controlled much of the southern Arabian Peninsula. The kingdom was known for its impressive architecture, particularly its distinctive towers, which were used as watchtowers, defensive structures, and homes for wealthy families. The people of Hadhramaut were skilled in agriculture, especially in growing frankincense and myrrh. They had a strong maritime culture and traded with India, East Africa, and Southeast Asia. Although the kingdom declined in the 4th century, Hadhramaut remained a cultural and economic center. Its legacy can still be seen today. The ancient Kingdom of Awsān (8th–7th century BCE) was one of the most important small kingdoms of South Arabia, and its capital Ḥajar Yaḥirr was a significant center of trade and commerce in the ancient world. The destruction of the city in the 7th century BCE by Karab El Watar, the king and Mukarrib of Saba', was a significant event in the history of South Arabia. The victory of the Sabaeans over Awsān is also a testament to the military might and strategic prowess of the Sabaeans, who were one of the most powerful and influential kingdoms in the region. The Himyarite Kingdom, or Himyar, was an ancient kingdom that existed from around the 2nd century BCE to the 6th century CE. It was centered in the city of Zafar, which is located in present-day Yemen.
The Himyarites were an Arab people who spoke a South Arabian language and were known for their prowess in trade and seafaring. They controlled the southern part of Arabia and had a prosperous economy based on agriculture, commerce, and maritime trade, and they were skilled in irrigation and terracing, which allowed them to cultivate crops in the arid environment. The Himyarites converted to Judaism in the 4th century CE, and their rulers became known as the "Kings of the Jews"; this conversion was likely influenced by their trade connections with the Jewish communities of the Red Sea region and the Levant. However, the Himyarites also tolerated other religions, including Christianity and the local pagan religions. The Nabataeans were nomadic Arabs who settled in a territory centred around their capital of Petra in what is now Jordan. Their early inscriptions were in Aramaic, but gradually switched to Arabic, and since they had writing, it was they who made the first inscriptions in Arabic. The Nabataean alphabet was adopted by Arabs to the south, and evolved into the modern Arabic script around the 4th century. This is attested by Safaitic inscriptions (beginning in the 1st century BCE) and the many Arabic personal names in Nabataean inscriptions. From about the 2nd century BCE, a few inscriptions from Qaryat al-Faw reveal a dialect no longer considered proto-Arabic, but pre-classical Arabic. Five Syriac inscriptions mentioning Arabs have been found at Sumatar Harabesi, one of which dates to the 2nd century CE. Arabs are first recorded in Palmyra in the late first millennium BCE. The soldiers of the sheikh Zabdibel, who aided the Seleucids in the battle of Raphia (217 BCE), were described as Arabs; Zabdibel and his men were not actually identified as Palmyrenes in the texts, but the name "Zabdibel" is a Palmyrene name, leading to the conclusion that the sheikh hailed from Palmyra. After the Battle of Edessa in 260 CE, Valerian's capture by the Sassanian king Shapur I was a significant blow to Rome, and it left the empire vulnerable to further attacks. Zenobia was able to capture most of the Near East, including Egypt and parts of Asia Minor. However, their empire was short-lived, as Aurelian was able to defeat the Palmyrenes and recover the lost territories. The Palmyrenes were helped by their Arab allies, but Aurelian was also able to leverage his own alliances to defeat Zenobia and her army. Ultimately, the Palmyrene Empire lasted only a few years, but it had a significant impact on the history of the Roman Empire and the Near East. Most scholars identify the Itureans as an Arab people who inhabited the region of Iturea. They emerged as a prominent power after the decline of the Seleucid Empire in the 2nd century BCE; from their base around Mount Lebanon and the Beqaa Valley, they came to dominate vast stretches of Syrian territory and appear to have penetrated into northern parts of Palestine as far as the Galilee. The Tanukhids were an Arab tribal confederation that lived in the central and eastern Arabian Peninsula during the late ancient and early medieval periods. They were a branch of the Rabi'ah tribe, which was one of the largest Arab tribes in the pre-Islamic period. They were known for their military prowess and played a significant role in the early Islamic period, fighting in battles against the Byzantine and Sassanian empires and contributing to the expansion of the Arab empire.
The Osroene Arabs, also known as the Abgarids, were in possession of the city of Edessa in the ancient Near East for a significant period of time. Edessa was located in the region of Osroene, which was an ancient kingdom that existed from the 2nd century BCE to the 3rd century CE. They established a dynasty known as the Abgarids, which ruled Edessa for several centuries. The most famous ruler of the dynasty was Abgar V, who is said to have corresponded with Jesus Christ and is believed to have converted to Christianity. The Abgarids played an important role in the early history of Christianity in the region, and Edessa became a center of Christian learning and scholarship. The Kingdom of Hatra was an ancient city located in the region of Mesopotamia; it was founded in the 2nd or 3rd century BCE and flourished as a major center of trade and culture during the Parthian Empire. The rulers of Hatra were known as the Arsacid dynasty, which was a branch of the Parthian ruling family. However, in the 2nd century CE, the Arab tribe of Banu Tanukh seized control of Hatra and established their own dynasty. The Arab rulers of Hatra assumed the title of "malka," which means king in Arabic, and they often referred to themselves as the "King of the Arabs." The Osroeni and Hatrans were part of several Arab groups or communities in upper Mesopotamia, which also included the Arabs of Adiabene, an ancient kingdom in northern Mesopotamia whose chief city was Arbela (Arba-ilu), where Mar Uqba had a school, or the neighboring Hazzah, by which name the later Arabs also called Arbela. This Arab presence in upper Mesopotamia was acknowledged by the Sasanians, who called the region Arbayistan, meaning "land of the Arabs"; it is first attested as a province in the Ka'ba-ye Zartosht inscription of the second Sasanian King of Kings, Shapur I (r. 240–270), which was erected in c. 262. The Emesene were a dynasty of Arab priest-kings that ruled the city of Emesa (modern-day Homs, Syria) in the Roman province of Syria from the 1st century BCE to the 3rd century CE. The dynasty is notable for producing a number of high priests of the god El-Gabal, who were also influential in Roman politics and culture. The first ruler of the Emesene dynasty was Sampsiceramus I, who came to power in 64 BCE. He was succeeded by his son, Iamblichus, who was followed by his own son, Sampsiceramus II. Under Sampsiceramus II, Emesa became a client kingdom of the Roman Empire, and the dynasty became more closely tied to Roman political and cultural traditions. The Ghassanids, Lakhmids and Kindites were the last major migration of pre-Islamic Arabs out of Yemen to the north. The Ghassanids increased the Semitic presence in then-Hellenized Syria; the majority of Semites there were Aramaic peoples. They mainly settled in the Hauran region and spread to modern Lebanon, Palestine and Jordan. Greeks and Romans referred to all the nomadic population of the desert in the Near East as Arabi. The Romans called Yemen "Arabia Felix". The Romans called the vassal nomadic states within the Roman Empire Arabia Petraea, after the city of Petra, and called the unconquered deserts bordering the empire to the south and east Arabia Magna. The Lakhmids, as a dynasty, inherited their power from the Tanukhids in the mid-Tigris region around their capital Al-Hira. They ended up allying with the Sassanids against the Ghassanids and the Byzantine Empire.
The Lakhmids contested control of the Central Arabian tribes with the Kindites, with the Lakhmids eventually destroying the Kingdom of Kinda in 540 after the fall of their main ally Himyar. The Persian Sassanids dissolved the Lakhmid dynasty in 602, placing it first under puppet kings and then under their direct control. The Kindites migrated from Yemen along with the Ghassanids and Lakhmids, but were turned back in Bahrain by the Abdul Qais Rabi'a tribe. They returned to Yemen and allied themselves with the Himyarites, who installed them as a vassal kingdom that ruled Central Arabia from "Qaryah Dhat Kahl" (present-day Qaryat al-Faw). They ruled much of the northern and central Arabian Peninsula until they were destroyed by the Lakhmid king Al-Mundhir and his son 'Amr. The Ghassanids were an Arab tribe in the Levant in the early third century. According to Arab genealogical tradition, they were considered a branch of the Azd tribe. They fought alongside the Byzantines against the Sasanians and Arab Lakhmids. Most Ghassanids were Christians, converting to Christianity in the first few centuries, and some merged with Hellenized Christian communities. After the Muslim conquest of the Levant, a few Ghassanids became Muslims; most remained Christian and joined Melkite and Syriac communities within what is now Jordan, Palestine, Syria, and Lebanon. The Salihids were Arab foederati in the 5th century and were ardent Christians; their period is less documented than the preceding and succeeding periods due to a scarcity of sources. Most references to the Salihids in Arabic sources derive from the work of Hisham ibn al-Kalbi, with the Tarikh of Ya'qubi considered valuable for determining the Salihids' fall and the terms of their foedus with the Byzantines. During the Middle Ages, Arab civilization flourished and the Arabs made significant contributions to the fields of science, mathematics, medicine, philosophy, and literature. Great cities like Baghdad, Cairo, and Cordoba rose and became centers of learning, attracting scholars, scientists, and intellectuals. Arabs forged many empires and dynasties, most notably the Rashidun Empire, the Umayyad Empire, the Abbasid Empire, and the Fatimid Empire, among others. These empires were characterized by their expansion, scientific achievements, and cultural flourishing, and extended from Spain to India. The region was vibrant and dynamic during the Middle Ages and left a lasting impact on the world. The rise of Islam began when Muhammad and his followers migrated from Mecca to Medina in an event known as the Hijra. Muhammad spent the last ten years of his life engaged in a series of battles to establish and expand the Muslim community. From 622 to 632, he led the Muslims in a state of war against the Meccans. During this period, the Arabs conquered the region of Basra, and under the leadership of Umar, they established a base and built a mosque there. Another conquest was Midian, but due to its harsh environment, the settlers eventually moved to Kufa. Umar successfully defeated rebellions by various Arab tribes, bringing stability to the entire Arabian Peninsula and unifying it. Under the leadership of Uthman, the Arab empire expanded through the conquest of Persia, with the capture of Fars in 650 and parts of Khorasan in 651. The conquest of Armenia also began in the 640s. During this time, the Rashidun Empire extended its rule over the entire Sassanid Empire and more than two-thirds of the Eastern Roman Empire.
However, the reign of Ali ibn Abi Talib, the fourth caliph, was marred by the First Fitna, or the First Islamic Civil War, which lasted throughout his rule. After a peace treaty with Hassan ibn Ali and the suppression of early Kharijite disturbances, Muawiyah I became the Caliph. This marked a significant transition in leadership. After the death of Muhammad in 632, Rashidun armies launched campaigns of conquest, establishing the Caliphate, or Islamic Empire, one of the largest empires in history. It was larger and lasted longer than previous Arab polities such as the Tanukhids of Queen Mawia or the Arab Palmyrene Empire. The Rashidun state was a completely new state, unlike the Arab kingdoms of its century such as the Himyarite, Lakhmid or Ghassanid kingdoms. During the Rashidun era, which is marked by the reign of the first four caliphs, or leaders, of the Arab community, the Arabs expanded rapidly, conquering many territories and establishing a vast Arab empire. These caliphs were Abu Bakr, Umar, Uthman and Ali, who are collectively known as the Rashidun, meaning "rightly guided." The Rashidun era is significant in Arab and Islamic history as it marks the beginning of the Arab empire and the spread of Islam beyond the Arabian Peninsula. During this time, the Arab community faced numerous challenges, including internal divisions and external threats from neighboring empires. Under the leadership of Abu Bakr, the Arab community successfully quelled a rebellion by some tribes who refused to pay Zakat, or Islamic charity. During the reign of Umar ibn al-Khattab, the Arab empire expanded significantly, conquering territories such as Egypt, Syria, and Iraq. The reign of Uthman ibn Affan was marked by internal dissent and rebellion, which ultimately led to his assassination. Ali, the cousin and son-in-law of Muhammad, succeeded Uthman as caliph but faced opposition from some members of the Islamic community who believed he was not rightfully appointed. Despite these challenges, the Rashidun era is remembered as a time of great progress and achievement in Arab and Islamic history. The caliphs established a system of governance that emphasized justice and equality for all members of the Islamic community. They also oversaw the compilation of the Quran into a single text and spread Arabic teachings and principles throughout the empire. Overall, the Rashidun era played a crucial role in shaping Arab history and continues to be revered by Muslims worldwide as a period of exemplary leadership and guidance. In 661, the Rashidun Caliphate fell into the hands of the Umayyad dynasty, and Damascus was established as the empire's capital. The Umayyads were proud of their Arab identity and sponsored the poetry and culture of pre-Islamic Arabia. They established garrison towns at Ramla, Raqqa, Basra, Kufa, Mosul and Samarra, all of which developed into major cities. Caliph Abd al-Malik established Arabic as the Caliphate's official language in 686. Caliph Umar II strove to resolve the conflict when he came to power in 717, demanding that all Muslims be treated as equals, but his intended reforms did not take effect, as he died after only three years of rule. By now, discontent with the Umayyads swept the region, and an uprising occurred in which the Abbasids came to power and moved the capital to Baghdad. The Umayyads had expanded their empire westwards, capturing North Africa from the Byzantines. Before the Arab conquest, North Africa was conquered or settled by various peoples, including Punics, Vandals and Romans.
After the Abbasid Revolution, the Umayyads lost most of their territories with the exception of Iberia. Their last holding became known as the Emirate of Córdoba. It was not until the rule of the grandson of the founder of this new emirate that the state entered a new phase as the Caliphate of Córdoba. This new state was characterized by an expansion of trade, culture and knowledge, and saw the construction of masterpieces of al-Andalus architecture and the library of Al-Hakam II, which housed over 400,000 volumes. With the collapse of the Umayyad state in 1031 CE, al-Andalus was divided into small kingdoms. The Abbasids were the descendants of Abbas ibn Abd al-Muttalib, one of the youngest uncles of Muhammad and of the same Banu Hashim clan. The Abbasids led a revolt against the Umayyads and defeated them in the Battle of the Zab, effectively ending their rule in all parts of the Empire with the exception of al-Andalus. In 762, the second Abbasid Caliph al-Mansur founded the city of Baghdad and declared it the capital of the Caliphate. Unlike the Umayyads, the Abbasids had the support of non-Arab subjects. The Islamic Golden Age was inaugurated by the middle of the 8th century by the ascension of the Abbasid Caliphate and the transfer of the capital from Damascus to the newly founded city of Baghdad. The Abbasids were influenced by the Quranic injunctions and hadith, such as "The ink of the scholar is more holy than the blood of martyrs", stressing the value of knowledge. During this period the Abbasid Empire became an intellectual centre for science, philosophy, medicine and education, as the Abbasids championed the cause of knowledge and established the "House of Wisdom" in Baghdad. Rival dynasties such as the Fatimids of Egypt and the Umayyads of al-Andalus were also major intellectual centres, with cities such as Cairo and Córdoba rivaling Baghdad. In 1258, the Mongols conquered Baghdad and killed the Caliph Al-Musta'sim. Members of the Abbasid royal family escaped the massacre and fled to Cairo, which had broken from Abbasid rule two years earlier; the Mamluk generals took over the political side of the kingdom while the Abbasid caliphs were engaged in civil activities and continued patronizing science, arts and literature. The Fatimid Caliphate was founded by al-Mahdi Billah, a descendant of Fatimah, the daughter of Muhammad; it was a Shia caliphate that existed from 909 to 1171 CE. The empire was based in North Africa, with its capital in Cairo, and at its height it controlled a vast territory that included parts of modern-day Egypt, Libya, Tunisia, Algeria, Morocco, Syria, and Palestine. The Fatimid state took shape among the Kutama, in the west of the North African littoral, in Algeria, conquering Raqqada, the Aghlabid capital, in 909. In 921 the Fatimids established the Tunisian city of Mahdia as their new capital. In 948 they shifted their capital to Al-Mansuriya, near Kairouan in Tunisia, and in 969 they conquered Egypt and established Cairo as the capital of their caliphate. The Fatimids were known for their religious tolerance and intellectual achievements; they established a network of universities and libraries that became centers of learning in the Islamic world. They also promoted the arts, architecture, and literature, which flourished under their patronage. One of the most notable achievements of the Fatimids was the construction of the Al-Azhar Mosque and Al-Azhar University in Cairo.
Founded in 970 CE, it is one of the oldest universities in the world and remains an important center of Islamic learning to this day. The Fatimids also had a significant impact on the development of Islamic theology and jurisprudence. They were known for their support of Shia Islam and their promotion of the Ismaili branch of Shia Islam. Despite their many achievements, the Fatimids faced numerous challenges during their reign. They were constantly at war with neighboring empires, including the Abbasid Caliphate and the Byzantine Empire. They also faced internal conflicts and rebellions, which weakened their empire over time. In 1171 CE, the Fatimid Caliphate was conquered by the Ayyubid dynasty, led by Saladin. Although the Fatimid dynasty came to an end, its legacy continued to influence Arab-Islamic culture and society for centuries to come. From 1517 to 1918, much of the Arab world was under Ottoman rule: the Ottomans defeated the Mamluk Sultanate in Cairo and ended the Abbasid Caliphate in the battles of Marj Dabiq and Ridaniya, entering the Levant and Egypt as conquerors and bringing down a caliphate that had lasted for many centuries. In 1911, Arab intellectuals and politicians from throughout the Levant formed al-Fatat ("the Young Arab Society"), a small Arab nationalist club, in Paris. Its stated aim was "raising the level of the Arab nation to the level of modern nations." In the first few years of its existence, al-Fatat called for greater autonomy within a unified Ottoman state rather than Arab independence from the empire. Al-Fatat hosted the Arab Congress of 1913 in Paris, the purpose of which was to discuss desired reforms with other dissenting individuals from the Arab world. However, as the Ottoman authorities cracked down on the organization's activities and members, al-Fatat went underground and demanded the complete independence and unity of the Arab provinces. The Arab Revolt, a military uprising of Arab forces against the Ottoman Empire during World War I, began in 1916 and was led by Sherif Hussein bin Ali; the goal of the revolt was to gain independence for the Arab lands under Ottoman rule and to create a unified Arab state. The revolt was sparked by a number of factors, including the Arab desire for greater autonomy within the Ottoman Empire, resentment towards Ottoman policies, and the influence of Arab nationalist movements. The Arab Revolt was a significant factor in the eventual defeat of the Ottoman Empire. The revolt helped to weaken Ottoman military power and tie up Ottoman forces that could have been deployed elsewhere. It also helped to increase support for Arab independence and nationalism, which would have a lasting impact on the region in the years to come. Following the Empire's defeat and the occupation of part of its territory by the Allied Powers in the aftermath of World War I, the Sykes–Picot Agreement had a significant impact on the Arab world and its people. The agreement divided the Arab territories of the Ottoman Empire into zones of control for France and Britain, ignoring the aspirations of the Arab people for independence and self-determination. The Golden Age of Arab civilization, known as the "Islamic Golden Age", is traditionally dated from the 8th century to the 13th century. The period is traditionally said to have ended with the collapse of the Abbasid Caliphate due to the Siege of Baghdad in 1258. During this time, Arab scholars made significant contributions to fields such as mathematics, astronomy, medicine, and philosophy.
These advancements had a profound impact on European scholars during the Renaissance. The Arabs shared their knowledge and ideas with Europe, including through translations of Arabic texts. These translations had a significant impact on the culture of Europe, leading to the transformation of many philosophical disciplines in the medieval Latin world. Additionally, the Arabs made original innovations in various fields, including the arts, agriculture, alchemy, music, and pottery, and passed on traditional star names such as Aldebaran, scientific terms like alchemy (whence also chemistry), algebra, and algorithm, and names of commodities such as sugar, camphor, cotton, and coffee. The medieval scholars of the Renaissance of the 12th century focused on studying Greek and Arabic works of natural science, philosophy, and mathematics, rather than on cultural texts. Arab logicians, most notably Averroes, had inherited Greek ideas after the Arabs conquered Egypt and the Levant. Their translations and commentaries on these ideas worked their way through the Arab West into Iberia and Sicily, which became important centers for this transmission of ideas. From the 11th to the 13th century, many schools dedicated to the translation of philosophical and scientific works from Classical Arabic to Medieval Latin were established in Iberia, most notably the Toledo School of Translators. This work of translation from Arab culture, though largely unplanned and disorganized, constituted one of the greatest transmissions of ideas in history. During the Timurid Renaissance, spanning the late 14th, the 15th, and the early 16th centuries, there was a significant exchange of ideas, art, and knowledge between different cultures and civilizations. Arab scholars, artists, and intellectuals played a role in this cultural exchange, contributing to the overall intellectual atmosphere of the time. They participated in various fields, including literature, art, science, and philosophy. The Arab Renaissance was a cultural and intellectual movement that emerged in the late 19th and early 20th centuries. The term "Nahda" means "awakening" or "renaissance" in Arabic, and refers to a period of renewed interest in Arabic language, literature, and culture. The modern period in Arab history refers to the time from the late 19th century to the present day. During this time, the Arab world experienced significant political, economic, and social changes. One of the most significant events of the modern period was the collapse of the Ottoman Empire; the end of Ottoman rule led to the emergence of new nation-states in the Arab world. In the event of the success of the Arab revolt and the victory of the Allies in World War I, Sharif Hussein was to establish an independent Arab state consisting of the Arabian Peninsula and the Fertile Crescent, including Iraq and the Levant, and he aimed to become "King of the Arabs" in this state. However, the Arab revolt succeeded in achieving only some of its objectives, including the independence of the Hejaz and the recognition of Sharif Hussein as its king by the Allies. Arab nationalism emerged as a major movement in the early 20th century, with many Arab intellectuals, artists, and political leaders seeking to promote unity and independence for the Arab world. This movement gained momentum after World War II, leading to the formation of the Arab League and the creation of several new Arab states.
Pan-Arabism emerged in the early 20th century and aimed to unite all Arabs into a single nation or state. It emphasized a shared ancestry, culture, history, language and identity, and sought to create a sense of pan-Arab identity and solidarity. The roots of pan-Arabism can be traced back to the Arab Renaissance or Al-Nahda movement of the late 19th century, which saw a revival of Arab culture, literature, and intellectual thought. The movement emphasized the importance of Arab unity and the need to resist colonialism and foreign domination. One of the key figures in the development of pan-Arabism was the Egyptian statesman and intellectual Gamal Abdel Nasser, who led the 1952 revolution in Egypt and became the country's president in 1956. Nasser promoted pan-Arabism as a means of strengthening Arab solidarity and resisting Western imperialism. He also supported the idea of Arab socialism, which sought to combine pan-Arabism with socialist principles. Similar attempts were made by other Arab leaders, such as Hafez al-Assad, Ahmed Hassan al-Bakr, Faisal I of Iraq, Muammar Gaddafi, Saddam Hussein, Gaafar Nimeiry and Anwar Sadat. Many proposed unions aimed to create a unified Arab entity that would promote cooperation and integration among Arab countries. However, the initiatives faced numerous challenges and obstacles, including political divisions, regional conflicts, and economic disparities. The United Arab Republic (UAR) was a political union formed between Egypt and Syria in 1958, with the goal of creating a federal structure that would allow each member state to retain its identity and institutions. However, by 1961, Syria had withdrawn from the UAR due to political differences, and Egypt continued to call itself the UAR until 1971, when it became the Arab Republic of Egypt. In the same year the UAR was formed, another proposed political union, the Arab Federation, was established between Jordan and Iraq, but it collapsed after only six months due to tensions with the UAR and the 14 July Revolution. A confederation called the United Arab States, which included the UAR and the Mutawakkilite Kingdom of Yemen, was also created in 1958 but dissolved in 1961. Later attempts to create a political and economic union among Arab countries included the Federation of Arab Republics, which was formed by Egypt, Libya, and Syria in the 1970s but dissolved after five years due to political and economic challenges. Muammar Gaddafi, the leader of Libya, also proposed the Arab Islamic Republic with Tunisia, aiming to include Algeria and Morocco; instead, the Arab Maghreb Union was formed in 1989. During the latter half of the 20th century, many Arab countries experienced political upheaval and conflicts, including revolutions. The Arab–Israeli conflict remains a major issue in the region, and has resulted in ongoing tensions and periodic outbreaks of violence. In recent years, the Arab world has faced new challenges, including economic and social inequalities, demographic changes, and the impact of globalization. The Arab Spring was a series of pro-democracy uprisings and protests that swept across several countries in the Arab world in 2010 and 2011. The uprisings were sparked by a combination of political, economic, and social grievances and called for democratic reforms and an end to authoritarian rule. While the protests resulted in the downfall of some long-time authoritarian leaders, they also led to ongoing conflicts and political instability in other countries.
Identity Arab identity is defined independently of religious identity, and pre-dates the spread of Islam, with historically attested Arab Christian kingdoms and Arab Jewish tribes. Today, however, most Arabs are Muslim, with a minority adhering to other faiths, largely Christianity, but also Druze and Baháʼí. Paternal descent has traditionally been considered the main source of affiliation in the Arab world when it comes to membership in an ethnic group or clan. Arab identity is shaped by a range of factors, including ancestry, history, language, customs, social constructs and traditions. It has been shaped by a rich history that includes the rise and fall of empires, colonization, and political turmoil. Despite the challenges faced by Arab communities, their shared cultural heritage has helped to maintain a sense of unity and pride in their identity. Today, Arab identity continues to evolve as Arab communities navigate complex political, social, and economic landscapes. Despite this, Arab identity remains an important aspect of the cultural and historical fabric of the Arab world, and continues to be celebrated and preserved by communities around the world. Subgroups Arab tribes are prevalent in the Arabian Peninsula, Mesopotamia, the Levant, Egypt, the Maghreb, the Sudan region and the Horn of Africa. The Arabs of the Levant are traditionally divided into Qays and Yaman tribes; the distinction between Qays and Yaman dates back to the pre-Islamic era and was based on tribal affiliations and geographic locations. These tribes include Banu Kalb, Kinda, the Ghassanids, and the Lakhmids. The Qays were made up of tribes such as Banu Kilab, Banu Tayy, Banu Hanifa, and Banu Tamim, among others. The Yaman, on the other hand, were composed of tribes such as Banu Hashim, Banu Makhzum, Banu Umayya, and Banu Zuhra, among others. There are also many Arab tribes indigenous to Mesopotamia (Iraq) and Iran, including from well before the Muslim conquest of Persia in 633 CE. The largest group of Iranian Arabs are the Khuzestani Arabs, including Banu Ka'b, Bani Turuf and the Musha'sha'iyyah sect. Smaller groups are the Khamseh nomads in Fars province and the Khorasani Arabs. As a result of the centuries-long Arab migration to the Maghreb, various Arab tribes (including Banu Hilal, Banu Sulaym and Maqil) settled in the Maghreb and formed the sub-tribes which exist to the present day. The Banu Hilal spent almost a century in Egypt before moving to Libya, Tunisia and Algeria, and another century later moved on to Morocco. According to Arab traditions, tribes are divided into major divisions called "Arab skulls", traditionally described in terms of strength, abundance, victory, and honor. A number of them branched out and later became independent tribes (sub-tribes), and the majority of Arab tribes are descended from these major tribes. Geographic distribution The total number of Arabs living in the Arab nations is estimated at 366 million by the CIA Factbook (as of 2014). The number of Arabs in countries outside the Arab League is estimated at 17.5 million, yielding a total of close to 384 million. The Arab world stretches around 13,000,000 square kilometres (5,000,000 sq mi), from the Atlantic Ocean in the west to the Arabian Sea in the east and from the Mediterranean Sea in the north to the Horn of Africa and the Indian Ocean in the southeast.
Arab diaspora refers to descendants of the Arab immigrants who, voluntarily or as refugees, emigrated from their native lands to non-Arab countries, primarily in East Africa, South America, Europe, North America, Australia and parts of South Asia, Southeast Asia, the Caribbean, and West Africa. According to the International Organization for Migration, there are 13 million first-generation Arab migrants in the world, of whom 5.8 million reside in Arab countries. Arab expatriates contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries were 40 to 190 per cent higher than trade revenues between these and other Arab countries. The 250,000-strong Lebanese community in West Africa is the largest non-African group in the region. Arab traders have long operated in Southeast Asia and along East Africa's Swahili coast. Zanzibar was once ruled by Omani Arabs. Most of the prominent Indonesians, Malaysians, and Singaporeans of Arab descent are Hadhrami people with origins in southern Arabia in the Hadramaut coastal region. There are millions of Arabs living in Europe, mostly concentrated in France (about 6,000,000 in 2005). Most Arabs in France are from the Maghreb, but some also come from the Mashreq areas of the Arab world. Arabs in France form the second largest ethnic group after French people. In Italy, Arabs first arrived on the southern island of Sicily in the 9th century. The largest modern communities on the island from the Arab world are Tunisians and Moroccans, who make up 10.9% and 8% respectively of the foreign population of Sicily, which in itself constitutes 3.9% of the island's total population. The modern Arab population of Spain numbers 1,800,000, and there have been Arabs in Spain since the early 8th century, when the Muslim conquest of Hispania created the state of Al-Andalus. In Germany the Arab population numbers over 1,401,950; in the United Kingdom, between 366,769 and 500,000; and in Greece, between 250,000 and 750,000. In addition, Greece is home to people from Arab countries who have the status of refugees (e.g. refugees of the Syrian civil war). In the Netherlands the Arab population numbers 180,000, and in Denmark 121,000. Other countries are also home to Arab populations, including Norway, Austria, Bulgaria, Switzerland, North Macedonia, Romania and Serbia. As of late 2015, Turkey had a total population of 78.7 million, with Syrian refugees accounting for 3.1% of that figure based on conservative estimates. Demographics indicated that the country previously had 1,500,000 to 2,000,000 Arab residents; Turkey's Arab population is now 4.5 to 5.1% of the total population, or approximately 4–5 million people. Arab immigration to the United States began in significant numbers during the 1880s, and today an estimated 2 million Americans trace their roots to an Arab background, according to the Census Bureau. Arab Americans are found in every state, but more than two-thirds of them live in just ten states, and one-third live in Los Angeles, Detroit, and New York City specifically. Most Arab Americans were born in the US, and nearly 82% of US-based Arabs are citizens. Arab immigrants began to arrive in Canada in small numbers in 1882. Their immigration was relatively limited until 1945, after which time it increased progressively, particularly in the 1960s and thereafter.
According to the website "Who are Arab Canadians", Montreal, the Canadian city with the largest Arab population, has approximately 267,000 Arab inhabitants. Latin America has the largest Arab population outside of the Arab world. Latin America is home to anywhere from 17 to 30 million people of Arab descent, which is more than any other diaspora region in the world. The Brazilian and Lebanese governments claim there are 7 million Brazilians of Lebanese descent. Also, the Brazilian government claims there are 4 million Brazilians of Syrian descent. Other large Arab communities include Argentina (about 3,500,000), Colombia (over 3,200,000), Venezuela (over 1,600,000), Mexico (over 1,100,000), Chile (over 800,000), and Central America, particularly El Salvador and Honduras (between 150,000 and 200,000). Interethnic marriage in the Arab community, regardless of religious affiliation, is very high; most community members have only one parent of Arab ethnicity. Arab Haitians (257,000), a large number of whom live in the capital, are more often than not concentrated in financial areas where the majority of them establish businesses. In 1728, a Russian officer described a group of Arab nomads who populated the Caspian shores of Mughan (in present-day Azerbaijan). It is believed that these groups migrated to the South Caucasus in the 16th century. The 1888 edition of Encyclopædia Britannica also mentioned a certain number of Arabs populating the Baku Governorate of the Russian Empire. They retained an Arabic dialect at least into the mid-19th century, and there are nearly 30 settlements still holding the name Arab (for example, Arabgadim, Arabojaghy, Arab-Yengija, etc.). From the time of the Arab conquest of the South Caucasus, continuous small-scale Arab migration from various parts of the Arab world occurred in Dagestan. The majority of these lived in the village of Darvag, to the north-west of Derbent. The latest of these accounts dates to the 1930s. Most Arab communities in southern Dagestan underwent linguistic Turkicisation; thus nowadays Darvag is a majority-Azeri village. According to the History of Ibn Khaldun, the Arabs that were once in Central Asia were either killed or fled the Tatar invasion of the region. However, today many people in Central Asia identify as Arabs. Most Arabs of Central Asia are fully integrated into local populations, and sometimes call themselves the same as locals (for example, Tajiks, Uzbeks), but they use special titles to show their Arab origin, such as Sayyid, Khoja or Siddiqui. There are only two communities in India which claim Arab descent, the Chaush of the Deccan region and the Chavuse of Gujarat. These groups are largely descended from Hadhrami migrants who settled in these two regions in the 18th century. However, neither community still speaks Arabic, although the Chaush have seen re-immigration to Eastern Arabia and thus a re-adoption of Arabic. In South Asia, where Arab ancestry is considered prestigious, some communities have origin myths that claim Arab ancestry. Several communities following the Shafi'i madhab (in contrast to other South Asian Muslims, who follow the Hanafi madhab) claim descent from Arab traders, including the Konkani Muslims of the Konkan region, the Mappilla of Kerala, and the Labbai and Marakkar of Tamil Nadu. A few Christian groups in India that claim and have Arab roots are situated in the state of Kerala.
South Asian Iraqi biradri may have records of their ancestors who migrated from Iraq in historical documents. The Sri Lankan Moors are the third largest ethnic group in Sri Lanka, constituting 9.2% of the country's total population. Some sources trace the ancestry of the Sri Lankan Moors to Arab traders who settled in Sri Lanka at some time between the 8th and 15th centuries. There were about 118,866 Arab Indonesians of Hadhrami descent in the 2010 Indonesian census. Afro-Arabs are individuals and groups from Africa who are of partial Arab descent. Most Afro-Arabs inhabit the Swahili Coast in the African Great Lakes region, although some can also be found in parts of the Arab world. Large numbers of Arabs migrated to West Africa, particularly Côte d'Ivoire (home to over 100,000 Lebanese), Senegal (roughly 30,000 Lebanese), Sierra Leone (roughly 10,000 Lebanese today; about 30,000 prior to the outbreak of civil war in 1991), Liberia, and Nigeria. Since the end of the civil war in 2002, Lebanese traders have become re-established in Sierra Leone. The Arabs of Chad occupy northern Cameroon and Nigeria (where they are sometimes known as Shuwa), and extend as a belt across Chad and into Sudan, where they form part of the Baggara grouping of Arab ethnic groups inhabiting Africa's Sahel. There are 171,000 in Cameroon, 150,000 in Niger, and 107,000 in the Central African Republic. Religion Arabs are mostly Muslims, with a Sunni majority and a Shia minority, one exception being the Ibadis, who predominate in Oman. Arab Christians generally follow Eastern Churches such as the Greek Orthodox and Greek Catholic churches, though a minority of Protestant Church followers also exists. There are also Arab communities consisting of Druze and Baháʼís. Historically, there were also sizeable populations of Arab Jews around the Arab world. Before the coming of Islam, most Arabs followed a pagan religion with a number of deities, including Hubal, Wadd, Allāt, Manat, and Uzza. A few individuals, the hanifs, had apparently rejected polytheism in favor of monotheism unaffiliated with any particular religion. Some tribes had converted to Christianity or Judaism. The most prominent Arab Christian kingdoms were the Ghassanid and Lakhmid kingdoms. When the Himyarite king converted to Judaism in the late 4th century, the elites of the other prominent Arab kingdom, the Kindites, being Himyarite vassals, apparently also converted (at least partly). With the expansion of Islam, polytheistic Arabs were rapidly Islamized, and polytheistic traditions gradually disappeared. Today, Sunni Islam dominates in most areas, vastly so in the Levant, North Africa, West Africa and the Horn of Africa. Shia Islam is dominant in Bahrain and southern Iraq, while northern Iraq is mostly Sunni. Substantial Shia populations exist in Lebanon, Yemen, Kuwait, Saudi Arabia, northern Syria and the Al-Batinah Region in Oman. There are small numbers of Ibadi and non-denominational Muslims too. The Druze community is concentrated in the Levant. Christianity had a prominent presence in pre-Islamic Arabia among several Arab communities, including the Bahrani people of Eastern Arabia, the Christian community of Najran, in parts of Yemen, and among certain northern Arabian tribes such as the Ghassanids, Lakhmids, Taghlib, Banu Amela, Banu Judham, Tanukhids and Tayy. In the early Christian centuries, Arabia was sometimes known as Arabia heretica, due to its being "well known as a breeding-ground for heterodox interpretations of Christianity."
Christians make up 5.5% of the population of Western Asia and North Africa. In Lebanon, Christians number about 40.5% of the population. In Syria, Christians make up 10% of the population. Christians in Palestine make up 8% and 0.7% of the populations, respectively. In Egypt, Christians number about 10% of the population. In Iraq, Christians constitute 0.1% of the population. In Israel, Arab Christians constitute 2.1% of the population (roughly 9% of the Arab population). Arab Christians make up 8% of the population of Jordan. Most North and South American Arabs are Christian, as are about half of the Arabs in Australia, who come particularly from Lebanon, Syria and Palestine. One well-known member of this religious and ethnic community is Saint Abo, martyr and the patron saint of Tbilisi, Georgia. Arab Christians also live in holy Christian cities such as Nazareth, Bethlehem and the Christian Quarter of the Old City of Jerusalem, and in many other villages with holy Christian sites. Culture Arab culture is shaped by a long and rich history that spans thousands of years, from the Atlantic Ocean in the west to the Arabian Sea in the east, and from the Mediterranean Sea in the north to the Horn of Africa and the Indian Ocean in the southeast. The various religions the Arabs have adopted throughout their history and the various empires and kingdoms that have ruled and led the Arabic civilization have contributed to the ethnogenesis and formation of modern Arab culture. Language, literature, gastronomy, art, architecture, music, spirituality, philosophy and mysticism are all part of the cultural heritage of the Arabs. Arabic is a Semitic language of the Afro-Asiatic family. The first evidence for the emergence of the language appears in military accounts from 853 BCE. Today it is widely used as a lingua franca by more than 500 million people. It is also a liturgical language for 1.7 billion Muslims. Arabic is one of six official languages of the United Nations, and is revered in Islam as the language of the Quran. Arabic has two main registers. Classical Arabic is the form of the Arabic language used in literary texts from Umayyad and Abbasid times (7th to 9th centuries). It is based on the medieval dialects of Arab tribes. Modern Standard Arabic (MSA) is its direct descendant, used today throughout the Arab world in writing and in formal speaking, for example in prepared speeches, some radio broadcasts, and non-entertainment content, though the lexis and stylistics of Modern Standard Arabic differ from those of Classical Arabic. There are also various regional dialects of colloquial spoken Arabic that vary greatly both from each other and from the formal written and spoken forms of Arabic. Arabic mythology comprises the ancient beliefs of the Arabs. Prior to Islam, the Kaaba of Mecca was covered in symbols representing the myriad demons, djinn, demigods, or simply tribal gods and other assorted deities which represented the polytheistic culture of pre-Islamic Arabia. From this plurality, an exceptionally broad context in which mythology could flourish has been inferred. The most popular beasts and demons of Arabian mythology are Bahamut, Dandan, Falak, Ghoul, Hinn, Jinn, Karkadann, Marid, Nasnas, Qareen, Roc, Shadhavar, Werehyena and other assorted creatures which reflect the profoundly polytheistic environment of pre-Islamic Arabia. The most prominent symbol of Arabian mythology is the Jinn, or genie. Jinn are supernatural beings that can be good or evil.
They are not purely spiritual, but are also physical in nature, being able to interact in a tactile manner with people and objects and likewise be acted upon. The jinn, humans, and angels make up the known sapient creations of God. Ghouls also feature in the mythology as a monster or evil spirit associated with graveyards and consuming human flesh. In Arabic folklore, ghouls belonged to a diabolic class of jinn and were said to be the offspring of Iblīs, the prince of darkness in Islam. They were capable of constantly changing form, but always retained donkey's hooves. The Quran, the main holy book of Islam, had a significant influence on the Arabic language, and marked the beginning of Arabic literature. Muslims believe it was transcribed in the Arabic dialect of the Quraysh, the tribe of Muhammad. As Islam spread, the Quran had the effect of unifying and standardizing Arabic. Not only is the Quran the first work of any significant length written in the language, but it also has a far more complicated structure than the earlier literary works with its 114 suwar (chapters) which contain 6,236 ayat (verses). It contains injunctions, narratives, homilies, parables, direct addresses from God, instructions and even comments on how the Quran will be received and understood. It is also admired for its layers of metaphor as well as its clarity, a feature which is mentioned in An-Nahl, the 16th surah. Al-Jahiz (born 776, in Basra – December 868/January 869) was an Arab prose writer and author of works of literature, Mu'tazili theology, and politico-religious polemics. A leading scholar in the Abbasid Caliphate, his canon includes two hundred books on various subjects, including Arabic grammar, zoology, poetry, lexicography, and rhetoric. Of his writings, only thirty books survive. Al-Jāḥiẓ was also one of the first Arabian writers to suggest a complete overhaul of the language's grammatical system, though this would not be undertaken until his fellow linguist Ibn Maḍāʾ took up the matter two hundred years later. There is a small remnant of pre-Islamic poetry, but Arabic literature predominantly emerges in the Middle Ages, during the Islamic Golden Age. Imru' al-Qais, a king and poet of the 6th century, was the last king of the Kindites. His work is considered among the finest Arabic poetry to date, and he is sometimes regarded as the father of Arabic poetry. Kitab al-Aghani by Abul-Faraj was called "the register of the Arabs" by the 14th-century historian Ibn Khaldun. Literary Arabic is derived from Classical Arabic, based on the language of the Quran as it was analyzed by Arabic grammarians beginning in the 8th century. A large portion of Arabic literature before the 20th century is in the form of poetry, and even prose from this period is either filled with snippets of poetry or is in the form of saj or rhymed prose. The ghazal or love poem had a long history, being at times tender and chaste and at other times rather explicit. In the Sufi tradition the love poem would take on a wider, mystical and religious importance. Arabic epic literature was much less common than poetry, and presumably originates in oral tradition, written down from the 14th century or so. Maqama or rhymed prose is intermediate between poetry and prose, and also between fiction and non-fiction. Maqama was an incredibly popular form of Arabic literature, being one of the few forms which continued to be written during the decline of Arabic in the 17th and 18th centuries. 
Arabic literature and culture declined significantly after the 13th century, to the benefit of Turkish and Persian. A modern revival took place beginning in the 19th century, alongside resistance against Ottoman rule. The literary revival is known as al-Nahda in Arabic, and was centered in Egypt and Lebanon. Two distinct trends can be found in the nahda period of revival. The first was a neo-classical movement which sought to rediscover the literary traditions of the past, and was influenced by traditional literary genres—such as the maqama—and works like One Thousand and One Nights. In contrast, a modernist movement began by translating Western modernist works—primarily novels—into Arabic. A tradition of modern Arabic poetry was established by writers such as Francis Marrash, Ahmad Shawqi and Hafiz Ibrahim. Iraqi poet Badr Shakir al-Sayyab is considered to be the originator of free verse in Arabic poetry. Arab cuisine is largely divided into Khaleeji cuisine, Levantine cuisine and Maghrebi cuisine. Arab cuisine has influenced the cuisines of various cultures, including Ottoman, Persian, and Andalusian. It is characterized by a variety of herbs and spices, including cumin, coriander, cinnamon, sumac, za'atar, cardamom, mint, saffron, sesame, thyme, turmeric and parsley. Arab cuisine is also known for its sweets and desserts, such as Knafeh, Baklava, Halva, and Qatayef. Arabic coffee, or qahwa, is a traditional drink that is served with dates. Arabic art has taken various forms, including, among other things, jewelry, textiles and architecture. Arabic script has also traditionally been heavily embellished with often colorful Arabic calligraphy, with one notable and widely used example being Kufic script. Arabic miniatures (Arabic: الْمُنَمْنَمَات الْعَرَبِيَّة, Al-Munamnamāt al-ʿArabīyah) are small paintings on paper, usually book or manuscript illustrations but also sometimes separate artworks that occupy entire pages. The earliest example dates from around 690 CE, with a flourishing of the art between 1000 and 1200 CE in the Abbasid caliphate. The art form went through several stages of evolution while witnessing the fall and rise of several Arab caliphates. Arab miniaturists were ultimately assimilated and subsequently disappeared due to the Ottoman occupation of the Arab world. Nearly all forms of Islamic miniatures (Persian miniatures, Ottoman miniatures and Mughal miniatures) owe their existence to Arabic miniatures, as Arab patrons were the first to demand the production of illuminated manuscripts in the Caliphate; it was not until the 14th century that the artistic skill reached the non-Arab regions of the Caliphate. Despite the considerable changes in Arabic miniature style and technique, even during their last decades, the early Umayyad Arab influence could still be noticed. Arabic miniature artists include Ismail al-Jazari, who illustrated his own Book of Knowledge of Ingenious Mechanical Devices. The Abbasid artist, Yahya Al-Wasiti, who probably lived in Baghdad in the late Abbasid era (12th to 13th centuries), was one of the pre-eminent exponents of the Baghdad school. In the period 1236–1237, he transcribed and illustrated the book Maqamat (also known as the Assemblies or the Sessions), a series of anecdotes of social satire written by Al-Hariri of Basra. The narrative concerns the travels of a middle-aged man as he uses his charm and eloquence to swindle his way across the Arabic world. 
With most surviving Arabic manuscripts in western museums, Arabic miniatures occupy very little space in modern Arab culture. Arabesque is a form of artistic decoration consisting of "surface decorations based on rhythmic linear patterns of scrolling and interlacing foliage, tendrils" or plain lines, often combined with other elements. Another definition is "Foliate ornament, typically using leaves, derived from stylised half-palmettes, which were combined with spiralling stems". It usually consists of a single design which can be 'tiled' or seamlessly repeated as many times as desired. The Arab world is home to around 8% of UNESCO World Heritage Sites (List of World Heritage Sites in Arab states). The oldest examples of architecture include those of pre-Islamic Arabia, as well as Nabataean architecture that developed in the ancient kingdom of the Nabataeans, a nomadic Arab tribe that controlled a significant portion of the Middle East from the 4th century BCE to the 2nd century CE. The Nabataeans were known for their skill in carving out elaborate buildings, tombs, and other structures from the sandstone cliffs of the region. One of the most famous examples of Nabataean architecture is the city of Petra, located in modern-day Jordan, which was the capital of the Nabataean kingdom and is renowned for its impressive rock-cut architecture. Prior to the start of the Arab conquests, Arab tribal client states, the Lakhmids and Ghassanids, were located on the borders of the Sassanid and Byzantine empires and were exposed to the cultural and architectural influences of both. They most likely played a significant role in transmitting and adapting the architectural traditions of these two empires to the later Arab Islamic dynasties. The Arab empire expanded rapidly, and with it came a diverse range of architectural influences. One of the most notable architectural achievements of the Arab Empire is the Great Mosque of Damascus in Syria, built in the early 8th century on the site of a Christian basilica, which incorporated elements of Byzantine and Roman architecture, such as arches, columns, and intricate mosaics. Another important architectural achievement is the Al-Aqsa Mosque in Jerusalem, which was built in the late 7th century. The mosque features an impressive dome and a large prayer hall, as well as intricate geometric patterns and calligraphy on the walls. Arabic music, while independent and flourishing in the 2010s, has a long history of interaction with many other regional musical styles and genres. It is an amalgam of the music of the Arab people in the Arabian Peninsula and the music of all the peoples that make up the Arab world today. Pre-Islamic Arab music was similar to ancient Middle Eastern music. Most historians agree that there existed distinct forms of music in the Arabian peninsula in the pre-Islamic period between the 5th and 7th century CE. Arab poets of that era, the "Jahili poets", meaning "the poets of the period of ignorance", used to recite poems with high notes. It was believed that Jinns revealed poems to poets and music to musicians. By the 11th century, Islamic Iberia had become a center for the manufacture of instruments. These goods spread gradually throughout France, influencing French troubadours, and eventually reaching the rest of Europe. The English words lute, rebec, and naker are derived from Arabic oud, rabab, and naqareh. 
A number of musical instruments used in classical music are believed to have been derived from Arabic musical instruments: the lute was derived from the Oud, the rebec (ancestor of violin) from the Maghreb rebab, the guitar from qitara, which in turn was derived from the Persian Tar, naker from naqareh, adufe from al-duff, alboka from al-buq, anafil from al-nafir, exabeba from al-shabbaba (flute), atabal (bass drum) from al-tabl, atambal from al-tinbal, the balaban, the castanet from kasatan, sonajas de azófar from sunuj al-sufr, the conical bore wind instruments, the xelami from the sulami or fistula (flute or musical pipe), the shawm and dulzaina from the reed instruments zamr and al-zurna, the gaita from the ghaita, rackett from iraqya or iraqiyya, geige (violin) from ghichak, and the theorbo from the tarab. During the 1950s and the 1960s, Arabic music began to take on a more Western tone – artists Umm Kulthum, Abdel Halim Hafez, and Shadia along with composers Mohamed Abd al-Wahab and Baligh Hamdi pioneered the use of western instruments in Egyptian music. By the 1970s several other singers had followed suit and a strand of Arabic pop was born. Arabic pop usually consists of Western styled songs with Arabic instruments and lyrics. Melodies are often a mix between Eastern and Western. Beginning in the mid-1980s, Lydia Canaan emerged as a musical pioneer, widely regarded as the first rock star of the Middle East. Arab polytheism was the dominant religion in pre-Islamic Arabia. Gods and goddesses, including Hubal and the goddesses al-Lāt, Al-'Uzzá and Manāt, were worshipped at local shrines, such as the Kaaba in Mecca, whilst Arabs in the south, in what is today's Yemen, worshipped various gods, some of which represented the Sun or Moon. Different theories have been proposed regarding the role of Allah in Meccan religion. Many of the physical descriptions of the pre-Islamic gods are traced to idols, especially near the Kaaba, which is said to have contained up to 360 of them. Until about the fourth century, almost all Arabs practised polytheistic religions. Although significant Jewish and Christian minorities developed, polytheism remained the dominant belief system in pre-Islamic Arabia. The religious beliefs and practices of the nomadic bedouin were distinct from those of the settled tribes of towns such as Mecca. Nomadic religious belief systems and practices are believed to have included fetishism, totemism and veneration of the dead but were connected principally with immediate concerns and problems and did not consider larger philosophical questions such as the afterlife. Settled urban Arabs, on the other hand, are thought to have believed in a more complex pantheon of deities. While the Meccans and the other settled inhabitants of the Hejaz worshipped their gods at permanent shrines in towns and oases, the bedouin practised their religion on the move. The most notable Arab gods and goddesses include: 'Amm, A'ra, Abgal, Allah, Al-Lat, Al-Qaum, Almaqah, Anbay, ʿAṯtar, Basamum, Dhu l-Khalasa, Dushara, Haukim, Hubal, Isāf and Nā'ila, Manaf, Manāt, Nasr, Nuha, Quzah, Ruda, Sa'd, Shams, Samas, Syn, Suwa', Ta'lab, Theandrios, al-'Uzzá, Wadd, Ya'uq, Yaghūth, Yatha, Aglibol, Astarte, Atargatis, Baalshamin, Bēl, Bes, Ēl, Ilāh, Inanna/Ishtar, Malakbel, Nabū, Nebo, Nergal, Yarhibol. Philosophical thought in the Arab world is heavily influenced by Arabic philosophy. Schools of Arabic/Islamic thought include Avicennism and Averroism. 
The first great Arab thinker in the Islamic tradition is widely regarded to be al-Kindi (801–873 A.D.), a Neo-Platonic philosopher, mathematician and scientist who lived in Kufa and Baghdad (modern day Iraq). After being appointed by the Abbasid Caliphs to translate Greek scientific and philosophical texts into Arabic, he wrote a number of original treatises of his own on a range of subjects, from metaphysics and ethics to mathematics and pharmacology. Much of his philosophical output focuses on theological subjects such as the nature of God, the soul and prophetic knowledge. The doctrines of the Arabic philosophers of the 9th–12th centuries influenced medieval Scholasticism in Europe. The Arabic tradition combines Aristotelianism and Neoplatonism with other ideas introduced through Islam. Influential thinkers include the non-Arabs al-Farabi and Avicenna. The Arabic philosophic literature was translated into Hebrew and Latin; this contributed to the development of modern European philosophy. The Arabic tradition was developed by Moses Maimonides and Ibn Khaldun. Arabic science underwent considerable development during the Middle Ages (8th to 13th centuries CE), a source of knowledge that later spread throughout Medieval Europe and greatly influenced both medical practice and education. The language of recorded science was Arabic. Scientific treatises were composed by thinkers originating from across the Muslim world. These accomplishments occurred after Muhammad united the Arab tribes and the spread of Islam beyond the Arabian peninsula. Within a century after Muhammad's death (632 CE), an empire ruled by Arabs was established. It encompassed a large part of the planet, stretching from southern Europe to North Africa to Central Asia and on to India. In 711 CE, Arab Muslims invaded southern Spain; al-Andalus was a center of Arabic scientific accomplishment. Soon after, Sicily too joined the greater Islamic world. Another center emerged in Baghdad from the Abbasids, who ruled part of the Islamic world during a historic period later characterized as the "Golden Age" (~750 to 1258 CE). This era can be identified as the years between 692 and 945, and ended when the caliphate was marginalized by local Muslim rulers in Baghdad – its traditional seat of power. From 945 onward until the sacking of Baghdad by the Mongols in 1258, the Caliph continued on as a figurehead, with power devolving more to local subordinates. The pious scholars of Islam, men and women collectively known as the ulama, were the most influential element of society in the fields of Sharia law, speculative thought and theology. The extent of Arabic scientific achievement is not as yet fully understood, but it is very large. These achievements encompass a wide range of subject areas, especially mathematics, astronomy, and medicine. Other subjects of scientific inquiry included physics, alchemy and chemistry, cosmology, ophthalmology, geography and cartography, sociology, and psychology. Al-Battani was an astronomer, astrologer and mathematician of the Islamic Golden Age. His work is considered instrumental in the development of science and astronomy. One of Al-Battani's best-known achievements in astronomy was the determination of the solar year as being 365 days, 5 hours, 46 minutes and 24 seconds, which is only 2 minutes and 22 seconds off. In mathematics, al-Battānī produced a number of trigonometrical relationships. Al-Zahrawi is regarded by many as the greatest surgeon of the Middle Ages. 
His surgical treatise "De chirurgia" is the first illustrated surgical guide ever written. It remained the primary source for surgical procedures and instruments in Europe for the next 500 years. The book helped lay the foundation to establish surgery as a scientific discipline independent from medicine, earning al-Zahrawi his name as one of the founders of this field. Other notable Arabic contributions include among other things: the pioneering of organic chemistry by Jābir ibn Hayyān, establishing the science of cryptology and cryptanalysis by al-Kindi, the development of analytic geometry by Ibn al-Haytham, who has been described as the "world's first true scientist", the discovery of the pulmonary circulation by Ibn al-Nafis, the discovery of the itch mite parasite by Ibn Zuhr, the first use of irrational numbers as algebraic objects by Abū Kāmil, the first use of positional decimal fractions by al-Uqlidisi, the development of the Arabic numerals and an early algebraic symbolism in the Maghreb, the Thabit number and Thābit theorem by Thābit ibn Qurra, the discovery of several new trigonometric identities by Ibn Yunus and al-Battani, the mathematical proof for Ceva's theorem by Ibn Hūd, the invention of the equatorium by al-Zarqali, the discovery of the physical reaction by Avempace, the identification of more than 200 new plants by Ibn al-Baitar, the Arab Agricultural Revolution, and the Tabula Rogeriana by al-Idrisi, which was the most accurate world map in pre-modern times. Several universities and educational institutions of the Arab world such as the University of al-Quarawiyyin, Al-Azhar University, and Al Zaytuna University are considered to be the oldest in the world. Founded by Fatima al Fihriya in 859 as a mosque, the University of Al Quaraouiyine in Fez is the oldest existing, continually operating and first degree-awarding educational institution in the world according to UNESCO and Guinness World Records and is sometimes referred to as the oldest university. There are many scientific Arabic loanwords in Western European languages, including English, mostly via Old French. This includes traditional star names such as Aldebaran, scientific terms like alchemy (whence also chemistry), algebra, algorithm, alcohol, alkali, cipher, zenith, etc. Under Ottoman rule, cultural life and science in the Arab world declined. In the 20th and 21st centuries, Arabs who have won important science prizes include Ahmed Zewail and Elias Corey (Nobel Prize), Michael DeBakey and Alim Benabid (Lasker Award), Omar M. Yaghi (Wolf Prize), Huda Zoghbi (Shaw Prize), Zaha Hadid (Pritzker Prize), and Michael Atiyah (both Fields Medal and Abel Prize). Rachid Yazami was one of the co-inventors of the lithium-ion battery, and Tony Fadell was important in the development of the iPod and the iPhone. Arab theatre is a rich and diverse cultural form that encompasses a wide range of styles, genres, and historical influences. Its roots lie in the pre-Islamic era, when poetry, storytelling, and musical performances were the main forms of artistic expression. It refers to theatrical performances that are created by Arab playwrights, actors, and directors. The roots of Arab theatre can be traced back to ancient Arabic poetry and storytelling, which often incorporated music and dance. In the early Arabic period, storytelling evolved into a more formalized art form that was performed in public gatherings and festivals. 
During the Islamic Golden Age in the 8th and 9th centuries, the city of Baghdad emerged as a hub of intellectual and artistic activity, including theatre. The court of the Abbasid Caliphate was home to many influential playwrights and performers, who helped to develop and popularize theatre throughout the Islamic world. Arab theatre has a long tradition of incorporating comedy and satire into its performances, often using humor to address social and political issues. Arab theatre encompasses a wide range of dramatic genres, including tragedy, melodrama, and historical plays. Many Arab playwrights have used drama to address contemporary issues, the role of women in Arab society, and the challenges facing young people in the modern world. In recent decades, many Arab theatre artists have pushed the boundaries of the form, experimenting with new styles and techniques. This has led to the emergence of a vibrant contemporary theatre scene in many Arab countries, with innovative productions and performances that challenge traditional notions of Arab identity and culture. Arab fashion and design have a rich history and cultural significance spanning centuries, with each region having its own unique fashion and design traditions. One of the most notable aspects of Arab fashion is the use of luxurious fabrics and intricate embroidery. Traditional garments, such as the Abaya and Thobe, are often made from high-quality fabrics like silk, satin, brocade, and are embellished with intricate embroidery and beading. In recent years, Arab fashion has gained global recognition, with designers like Elie Saab, Zuhair Murad, and Reem Acra showcasing their designs on international runways. These designers incorporate traditional Arab design elements into their collections, such as ornate patterns, luxurious fabrics, and intricate embellishments. In addition to fashion, Arab design is also characterized by its intricate geometric patterns, calligraphy, and use of vibrant colors. Arabic art and architecture, with their intricate geometric patterns and motifs, have influenced Arab design for centuries. Arab designers also incorporate traditional motifs, such as the paisley and the arabesque, into their work. Overall, Arab fashion elements are rooted in the rich cultural heritage of the Arab world and continue to inspire designers today. Arab weddings have changed greatly over the years. Traditional Arab weddings have involved elements such as elaborate attire and traditional music, dance and ceremonies, and are in some cases unique from one region to another, even within the same country. The practice of marrying relatives is a common feature of Arab culture. In the Arab world today, between 40% and 50% of all marriages are consanguineous or between close family members, though these figures may vary among Arab nations. In Egypt, around 40% of the population marry a cousin. A 1992 survey in Jordan found that 32% were married to a first cousin; a further 17.3% were married to more distant relatives. 67% of marriages in Saudi Arabia are between close relatives, as are 54% of all marriages in Kuwait, whereas 18% of all Lebanese marriages were between blood relatives. Due to the actions of Muhammad and the Rashidun, marriage between cousins is explicitly allowed in Islam and the Quran itself does not discourage or forbid the practice. 
Nevertheless, opinions vary on whether the phenomenon should be seen as exclusively based on Islamic practices, as a 1992 study among Arabs in Jordan did not show significant differences between Christian Arabs and Muslim Arabs when comparing the occurrence of consanguinity. Genetics Arabs are genetically diverse, arising from admixture with indigenous peoples of the pre-Islamic Middle East and North Africa, following the Islamic expansion. Genetic ancestry components related to the Arabian Peninsula display an increasing frequency pattern from west to east over North Africa. A similar frequency pattern exists across northeastern Africa, with genetic affinities to groups of the Arabian Peninsula decreasing further south along the Nile river valley across Sudan and South Sudan. This genetic cline of admixture is dated to the time of Arab expansion and immigration to the Maghreb and northeast Africa. Genetic research has indicated that Palestinian Arabs and Jews share common genetic ancestry and are closely related. According to a 2016 study, indigenous Arabs from the Arabian Peninsula are direct descendants of the first Eurasian populations established by Out of Africa migrations. They are also very distant from contemporary Eurasians, although there is a signal of European admixture. Ancient DNA analysis has confirmed the genetic relationship between Natufians and other ancient and modern Middle Easterners and the broader West Eurasian meta-population (i.e. Europeans and South-Central Asians). A 2021 study found that some modern Arab groups, such as Saudi Arabians and Yemenis, derive most of their ancestry from local Natufian hunter-gatherers and have less Neolithic Anatolian ancestry than Levantines. The presence of Neolithic Iranian ancestry among modern Arabs can be attributed to migrations during the Bronze Age. The Natufian population also displays ancestral ties to Paleolithic Taforalt samples, the makers of the Epipaleolithic Iberomaurusian culture of the Maghreb. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer_monitor] | [TOKENS: 4250] |
Contents Computer monitor A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls. The display in modern monitors is typically an LCD with LED backlight, having by the 2010s replaced CCFL backlit LCDs. Before the mid-2000s, most monitors used a cathode ray tube (CRT) as the image output technology. A monitor is typically connected to its host computer via DisplayPort, HDMI, USB-C, DVI, or VGA. Less commonly, monitors use other proprietary connectors and signals to connect to a computer. Originally computer monitors were used for data processing while television sets were used for video. From the 1980s onward, computers (and their monitors) have been used for both data processing and video, while televisions have implemented some computer functionality. Since 2010, the typical display aspect ratio of both televisions and computer monitors has changed from 4:3 to 16:9. Modern computer monitors are often functionally interchangeable with television sets and vice versa. As most computer monitors do not include integrated speakers, TV tuners, or remote controls, external components such as a DTA box may be needed to use a computer monitor as a TV set. History Early electronic computer front panels were fitted with an array of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. Computer monitors were formerly known as visual display units (VDU), particularly in British English. This term mostly fell out of use by the 1990s. Technologies Multiple technologies have been used for computer monitors. Until the 21st century most used cathode ray tubes, but they have largely been superseded by LCD monitors. The first computer monitors used cathode ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the workstation in a single large chassis, typically limiting them to emulation of a paper teletypewriter, thus the early epithet of 'glass TTY'. The display was monochromatic and far less sharp and detailed than on a modern monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use; wider commercial use became possible after the release of a slow but affordable Tektronix 4010 terminal in 1972. 
Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a possible feature for a few MOS 6500 series-based machines (such as the Apple II computer and the Atari 2600 console, both introduced in 1977), and color output was a specialty of the more graphically sophisticated Atari 8-bit computers, introduced in 1979. These computers could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors at a resolution of 320 × 200 pixels, or two colors at 640 × 200 pixels. In 1984 IBM introduced the Enhanced Graphics Adapter which was capable of producing 16 colors and had a resolution of 640 × 350. By the end of the 1980s color progressive scan CRT monitors were widely available and increasingly affordable, while the sharpest prosumer monitors could clearly display high-definition video, against the backdrop of efforts at HDTV standardization from the 1970s to the 1980s failing continuously, leaving consumer SDTVs to stagnate increasingly far behind the capabilities of computer CRT monitors well into the 2000s. During the following decade, maximum display resolutions gradually increased and prices continued to fall as CRT technology remained dominant in the PC monitor market into the new millennium, partly because it remained cheaper to produce. CRTs still offer color, grayscale, motion, and latency advantages over today's LCDs, but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry. There are multiple technologies that have been used to implement liquid-crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors. The first standalone LCDs appeared in the mid-1990s, selling for high prices. As prices declined they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo FlexScan L66 in the mid-1990s, the SGI 1600SW, Apple Studio Display and the ViewSonic VP140 in 1998. In 2003, LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The physical advantages of LCD over CRT monitors are that LCDs are lighter, smaller, and consume less power. In terms of performance, LCDs produce little or no flicker (reducing eyestrain), a sharper image at native resolution, and better checkerboard contrast. 
On the other hand, CRT monitors have superior blacks, viewing angles, and response time, can use arbitrary lower resolutions without aliasing, and flicker can be reduced with higher refresh rates, though this flicker can also be used to reduce motion blur compared to less flickery displays such as most LCDs. Many specialized fields such as vision science remain dependent on CRTs, the best LCD monitors having achieved moderate temporal accuracy, and so can be used only if their poor spatial accuracy is unimportant. High dynamic range (HDR) has been implemented into high-end LCD monitors to improve grayscale accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part due to television series, motion pictures and video games transitioning to widescreen, which makes squarer monitors unsuited to display them correctly. Organic light-emitting diode (OLED) monitors provide most of the benefits of both LCD and CRT monitors with few of their drawbacks, though much like plasma panels or very early CRTs they suffer from burn-in, and remain very expensive. Measurements of performance The performance of a monitor is measured by the following parameters: On two-dimensional display devices such as computer monitors the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the bezel or other aspects of the unit's design. The main measurements for display devices are width, height, total area and the diagonal. The size of a display is usually given by manufacturers diagonally, i.e. as the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size and was not confusing when the aspect ratio was universally 4:3. With the introduction of flat-panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger viewable area than an eighteen-inch cathode ray tube. Estimation of monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that for example a 16:9 21-inch (53 cm) widescreen display has less area than a 21-inch (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 in × 12.6 in (43 cm × 32 cm) and an area of 211 sq in (1,360 cm2), while the widescreen is 18.3 in × 10.3 in (46 cm × 26 cm), 188 sq in (1,210 cm2). Until about 2001, most computer monitors had a 4:3 aspect ratio and some had 5:4 or 8:7. Between 2001 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses such as displaying two standard letter pages side by side in a word processor, or displaying large-size drawings and application menus at the same time in CAD software, as well as the wider field of view in video games and movie viewing. 
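The 21-inch comparison above follows directly from the diagonal and the aspect ratio via the Pythagorean theorem. The short Python sketch below reproduces those figures; it is an illustrative calculation only, and the function name is ours rather than anything taken from a monitor specification.

import math

def screen_dimensions(diagonal, ratio_w, ratio_h):
    # Split a diagonal measurement into width, height and area for a given
    # aspect ratio, using the Pythagorean theorem.
    scale = diagonal / math.hypot(ratio_w, ratio_h)
    width, height = ratio_w * scale, ratio_h * scale
    return width, height, width * height

for label, (w, h) in {"4:3": (4, 3), "16:9": (16, 9)}.items():
    width, height, area = screen_dimensions(21, w, h)
    print(f"21-inch {label}: {width:.1f} in x {height:.1f} in, about {area:.0f} sq in")

# Prints approximately 16.8 x 12.6 in and 18.3 x 10.3 in, with areas of roughly
# 212 and 188 sq in (the text above rounds the 4:3 area down to 211 sq in).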
In 2008, 16:10 became the most commonly sold aspect ratio for LCD monitors and the same year 16:10 was the mainstream standard for laptops and notebook computers. In 2010, the computer industry started to move over from 16:10 to 16:9 because 16:9 was chosen as the standard high-definition television display size, and because 16:9 panels were cheaper to manufacture. In 2011, non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung, this was because "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand." The resolution of computer monitors has increased over time, from 280 × 192 during the late 1970s to 1024 × 768 during the late 1990s. Since 2009, the most commonly sold resolution for computer monitors is 1920 × 1080, shared with the 1080p of HDTV. Before 2013, mass-market LCD monitors were limited to 2560 × 1600 at 30 in (76 cm), excluding niche professional monitors. By 2015 most major display manufacturers had released 3840 × 2160 (4K UHD) displays, and the first 7680 × 4320 (8K) monitors had begun shipping. Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded by 8 bits per primary color. The RGB value [255, 0, 0] represents red, but slightly different colors in different color spaces such as Adobe RGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible if the monitor is calibrated. A picture that uses colors that are outside the sRGB color space will display on an sRGB color space monitor with limitations. Even today, many monitors that can display the sRGB color space are neither factory- nor user-calibrated to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print. Additional features Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life. Some monitors will also switch themselves off after a time period on standby. Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear. Most modern monitors have two indicator light colors: if a video-input signal is detected, the indicator light is green, and when the monitor is in power-saving mode, the screen is black and the indicator light is orange. Some monitors have different indicator light colors and some monitors have a blinking indicator light when in power-saving mode. Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers. 
These monitors have advanced microprocessors which contain codec information, Windows interface drivers and other small software which help these features function properly. Monitors that feature an aspect ratio greater than 2:1 (for instance, 21:9 or 32:9, as opposed to the more common 16:9, which resolves to 1.77:1) are marketed as ultrawide monitors. Monitors with an aspect ratio greater than 3:1 are marketed as super ultrawide monitors. These are typically massive curved screens intended to replace a multi-monitor deployment. Touchscreen monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints. Some displays, especially newer flat-panel monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness but reflections from lights and windows are more visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only partly mitigates the problem. In curved monitors, most often using nominally flat-panel display technology such as LCD or OLED, a concave rather than convex curve is imparted, reducing geometric distortion, especially in extremely large and wide seamless desktop monitors intended for close viewing range. Newer monitors are able to display a different image for each eye, often with the help of special glasses and polarizers, giving the perception of depth. An autostereoscopic screen can generate 3D images without headgear. Some monitors offer features for medical use or for outdoor placement. Narrow viewing angle screens are used in some security-conscious applications. Other integrated options include screen calibration tools, screen hoods, signal transmitters and protective screens. Some monitors combine a display with a graphics tablet. Such devices typically respond only to pressure applied with one or more special tools. Newer models, however, are able to detect touch from any pressure and often have the ability to detect tool tilt and rotation as well. Touch and tablet sensors are often used on sample and hold displays such as LCDs to substitute for the light pen, which can only work on CRTs. Some displays offer the option of use as a reference monitor; these calibration features can give advanced color management control to produce a near-perfect image. Option for professional LCD monitors, inherent to OLED & CRT; professional feature with mainstream tendency. Near to mainstream professional feature; advanced hardware driver for backlit modules with local zones of uniformity correction. Mounting Computer monitors are provided with a variety of methods for mounting them depending on the application and environment. A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a VESA mount. A VESA standard mount allows the monitor to be used with more after-market stands if the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation. The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat-panel displays to stands or wall mounts. 
It is implemented on most modern flat-panel monitors and TVs. For computer monitors, the VESA Mount typically consists of four threaded holes on the rear of the display that will mate with an adapter bracket. Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack: A fixed rack mount monitor is mounted directly to the rack with the flat-panel or CRT visible at all times. The height of the unit is measured in rack units (RU) and 8U or 9U are most common to fit 17-inch or 19-inch screens. The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal screen is the largest size that will fit within the rails of a 19-inch rack. Larger flat-panels may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller screens side by side into one rack mount. A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides allowing the display to be folded down and the unit slid into the rack for storage as a drawer. The flat display is visible only when pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard, creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD but there are systems providing two or three displays in a single rack mount system. A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the screen, sides, top and bottom, to allow mounting. This contrasts with a rack mount display where the flanges are only on the sides. The flanges will be provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. Often a gasket is included to provide a water-tight seal to the panel, and the front of the screen will be sealed to the back of the front panel to prevent water and dirt contamination. An open frame monitor provides the display and enough supporting structure to hold associated electronics and to minimally support the display. Provision will be made for attaching the unit to some external structure for support and protection. Open frame monitors are intended to be built into some other piece of equipment providing its own case. An arcade video game would be a good example, with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays, with the end-use display simply providing an attractive protective enclosure. Some rack mount monitor manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame display for inclusion into their product. Security vulnerabilities According to a National Security Agency (NSA) document leaked to Der Spiegel, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable to allow the NSA to remotely see what is being displayed on the targeted computer monitor. Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. 
While most effective on older CRT monitors due to their strong electromagnetic emissions, it can potentially apply to LCDs as well, although modern shielding techniques significantly mitigate the risk. Phreaking more generally is the process of exploiting telephone networks. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter] | [TOKENS: 780] |
Contents Pause Giant AI Experiments: An Open Letter Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari. Motivations The publication occurred a week after the release of OpenAI's large language model GPT-4. It asserts that current large language models are "becoming human-competitive at general tasks", referencing a paper about early experiments of GPT-4, described as having "Sparks of AGI". AGI is described as posing numerous important risks, especially in a context of race-to-the-bottom dynamics in which some AI labs may be incentivized to overlook security to deploy products more quickly. It asks to refocus AI research on making powerful AI systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". The letter also recommends more governmental regulation, independent audits before training AI systems, as well as "tracking highly capable AI systems and large pools of computational capability" and "robust public funding for technical AI safety research". FLI suggests using the "amount of computation that goes into a training run" as a proxy for how powerful an AI is, and thus as a threshold. Reception The letter received widespread coverage, with support coming from a range of high-profile figures. As of July 2024, a pause has not been realized; instead, as FLI pointed out on the letter's one-year anniversary, AI companies have directed "vast investments in infrastructure to train ever-more giant AI systems". However, it was credited with generating a "renewed urgency within governments to work out what to do about the rapid progress of AI", and reflecting the public's increasing concern about risks presented by AI. Eliezer Yudkowsky wrote that the letter "doesn't go far enough" and argued that it should ask for an indefinite pause. He fears that finding a solution to the alignment problem might take several decades and that any sufficiently intelligent misaligned AI might cause human extinction. Some IEEE members have expressed various reasons for signing the letter, such as that "There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm." One AI ethicist argued that the letter raises awareness of multiple issues such as voice cloning, but argued the letter was unactionable and unenforceable. The letter has been criticized for diverting attention from more immediate societal risks such as algorithmic biases. Timnit Gebru and others argued that the letter was sensationalist and amplified "some futuristic, dystopian sci-fi scenario" instead of current problems with AI today. Former Microsoft CEO Bill Gates chose not to sign the letter, stating that he does not think "asking one particular group to pause solves the challenges". Sam Altman, CEO of OpenAI, commented that the letter was "missing most technical nuance about where we need the pause" and stated that "An earlier version of the letter claimed OpenAI is training GPT-5 right now. 
We are not and won't for some time." Reid Hoffman argued the letter was "virtue signalling", with no real impact. The experiments continued regardless; GPT-5 was announced in 2025. List of notable signatories Listed below are some notable signatories of the letter. See also References External links |
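FLI's suggestion, noted above, of using the amount of computation in a training run as a proxy for capability and thus as a threshold can be illustrated with a rough back-of-the-envelope sketch in Python. Everything in it is an assumption made for illustration: the 6 x parameters x tokens estimate is a common rule of thumb for dense transformer training compute, and the model size, token count, and cutoff value are hypothetical, not figures taken from the letter.

def estimated_training_flops(parameters, training_tokens):
    # Common rule-of-thumb estimate of training compute for dense transformers:
    # roughly 6 floating-point operations per parameter per training token.
    return 6.0 * parameters * training_tokens

def exceeds_threshold(flops, threshold=1e25):
    # Compare an estimate against a hypothetical compute threshold.
    return flops >= threshold

# Hypothetical model: 100 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(100e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOP")
print("Exceeds the hypothetical 1e25 FLOP threshold:", exceeds_threshold(flops))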
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Protozoa] | [TOKENS: 4153] |
Contents Protozoa Protozoa (sg.: protozoan or protozoon; alternative plural: protozoans) are a polyphyletic group of single-celled eukaryotes, either free-living or parasitic, that feed on organic matter such as other microorganisms or organic debris. Historically, protozoans were regarded as "one-celled animals". When first introduced by Georg Goldfuss, in 1818, the taxon Protozoa was erected as a class within the Animalia, with the word 'protozoa' meaning "first animals", because they often possess animal-like behaviours, such as motility and predation, and lack a cell wall, as found in plants and many algae. This classification remained widespread in the 19th and early 20th century, and even became elevated to a variety of higher ranks, including phylum, subkingdom, kingdom, and then sometimes included within the paraphyletic Protoctista or Protista. By the 1970s, it became usual to require that all taxa be monophyletic (all members being derived from one common ancestor that is itself regarded as belonging in the taxon), and holophyletic (containing all of the known descendants of that common ancestor). The taxon 'Protozoa' fails to meet these standards, so grouping protozoa with animals, and treating them as closely related, became no longer justifiable. The term continues to be used in a loose way to describe single-celled protists (that is, eukaryotes that are not animals, plants, or fungi) that feed by heterotrophy. Traditional textbook examples of protozoa are Amoeba, Paramecium, Euglena and Trypanosoma. History of classification The word "protozoa" (singular protozoon) was coined in 1818 by zoologist Georg August Goldfuss (=Goldfuß), as the Greek equivalent of the German Urthiere, meaning "primitive, or original animals" (ur- 'proto-' + Thier 'animal'). Goldfuss created Protozoa as a class containing what he believed to be the simplest animals. Originally, the group included not only single-celled microorganisms but also some "lower" multicellular animals, such as rotifers, corals, sponges, jellyfish, bryozoans and polychaete worms. The term Protozoa is formed from the Greek words πρῶτος (prôtos), meaning "first", and ζῷα (zôia), plural of ζῷον (zôion), meaning "animal". In 1848, with better microscopes and Theodor Schwann and Matthias Schleiden's cell theory, the zoologist C. T. von Siebold proposed that the bodies of protozoa such as ciliates and amoebae consisted of single cells, similar to those from which the multicellular tissues of plants and animals were constructed. Von Siebold redefined Protozoa to include only such unicellular forms, to the exclusion of all Metazoa (animals). At the same time, he raised the group to the level of a phylum containing two broad classes of microorganisms: Infusoria (mostly ciliates) and flagellates (flagellated protists and amoebae). The definition of Protozoa as a phylum or subkingdom composed of "unicellular animals" was adopted by the zoologist Otto Bütschli—celebrated at his centenary as the "architect of protozoology". As a phylum under Animalia, the Protozoa were firmly rooted in a simplistic "two-kingdom" concept of life, according to which all living beings were classified as either animals or plants. As long as this scheme remained dominant, the protozoa were understood to be animals and studied in departments of Zoology, while photosynthetic microorganisms and microscopic fungi—the so-called Protophyta—were assigned to the Plants, and studied in departments of Botany. 
Criticism of this system began in the latter half of the 19th century, with the realization that many organisms met the criteria for inclusion among both plants and animals. For example, the algae Euglena and Dinobryon have chloroplasts for photosynthesis, like plants, but can also feed on organic matter and are motile, like animals. In 1860, John Hogg argued against the use of "protozoa", on the grounds that "naturalists are divided in opinion—and probably some will ever continue so—whether many of these organisms or living beings, are animals or plants." As an alternative, he proposed a new kingdom called Primigenum, consisting of both the protozoa and unicellular algae, which he combined under the name "Protoctista". In Hogg's conception, the animal and plant kingdoms were likened to two great "pyramids" blending at their bases in the kingdom Primigenum. In 1866, Ernst Haeckel proposed a third kingdom of life, which he named Protista. At first, Haeckel included a few multicellular organisms in this kingdom, but in later work, he restricted the Protista to single-celled organisms, or simple colonies whose individual cells are not differentiated into different kinds of tissues. Despite these proposals, Protozoa emerged as the preferred taxonomic placement for heterotrophic microorganisms such as amoebae and ciliates, and remained so for more than a century. In the course of the 20th century, the old "two kingdom" system began to weaken, with the growing awareness that fungi did not belong among the plants, and that most of the unicellular protozoa were no more closely related to the animals than they were to the plants. By mid-century, some biologists, such as Herbert Copeland, Robert H. Whittaker and Lynn Margulis, advocated the revival of Haeckel's Protista or Hogg's Protoctista as a kingdom-level eukaryotic group, alongside Plants, Animals and Fungi. A variety of multi-kingdom systems were proposed, and the kingdoms Protista and Protoctista became established in biology texts and curricula. By 1954, Protozoa were classified as "unicellular animals", as distinct from the "Protophyta", single-celled photosynthetic algae, which were considered primitive plants. In the system of classification published in 1964 by B.M. Honigsberg and colleagues, the phylum Protozoa was divided according to the means of locomotion, such as by cilia or flagella. Despite awareness that the traditional Protozoa was not a clade, a natural group with a common ancestor, some authors have continued to use the name, while applying it to differing scopes of organisms. In a series of classifications by Thomas Cavalier-Smith and collaborators since 1981, the taxon Protozoa was applied to certain groups of eukaryotes, and ranked as a kingdom. A scheme presented by Ruggiero et al. in 2015 placed eight not closely related phyla within kingdom Protozoa: Euglenozoa, Amoebozoa, Metamonada, Choanozoa sensu Cavalier-Smith, Loukozoa, Percolozoa, Microsporidia and Sulcozoa. This approach excludes several major groups traditionally placed among the protozoa, such as the ciliates, dinoflagellates, foraminifera, and the parasitic apicomplexans, which were moved to other groups such as Alveolata and Stramenopiles, under the polyphyletic Chromista. The Protozoa in this scheme were paraphyletic, because the kingdom excluded some descendants of Protozoa. 
The continued use by some authors of 'Protozoa' in its old sense highlights the uncertainty as to what is meant by the word 'Protozoa', the need for disambiguating statements such as "in the sense intended by Goldfuß", and the problems that arise when new meanings are given to familiar taxonomic terms. Some authors classify Protozoa as a subgroup of mostly motile protists. Others class any unicellular eukaryotic microorganism as protists, and make no reference to 'Protozoa'. In 2005, members of the Society of Protozoologists voted to change its name to the International Society of Protistologists. In the system of eukaryote classification published by the International Society of Protistologists in 2012, members of the old phylum Protozoa have been distributed among a variety of supergroups.
Phylogenetic distribution
Protists are distributed across all major groups of eukaryotes, including those that contain multicellular algae, green plants, animals, and fungi. If photosynthetic and fungal protists are distinguished from protozoa, the major eukaryotic groups appear as follows (the Metamonada are hard to place, being sister possibly to Discoba, possibly to Malawimonada):
- Ancyromonadida – flagellate protozoa
- Malawimonada – flagellate protozoa
- CRuMs – protozoa, often flagellate
- Amoebozoa – amoeboid protozoa
- Breviatea – parasitic protozoa
- Apusomonadida – flagellate protozoa
- Holomycota (including multicellular fungi) – fungal protists
- Holozoa (including multicellular animals) – amoeboid protozoa
- Metamonada (placement uncertain) – flagellate protozoa
- Discoba – euglenoid protists (some photosynthetic), flagellate/amoeboid protozoa
- Cryptista – protists (algae)
- Rhodophyta (multicellular red algae) – protists (red algae)
- Picozoa – protists (algae)
- Glaucophyta – protists (algae)
- Viridiplantae (including multicellular plants) – protists (green algae)
- Hemimastigophora – flagellate protozoa
- Provora – flagellate protozoa
- Haptista – protozoa
- Telonemia – flagellate protozoa
- Rhizaria – protozoa, often amoeboid
- Alveolata – protozoa
- Stramenopiles – flagellate protists (photosynthetic)
Characteristics
Reproduction in Protozoa can be sexual or asexual. Most Protozoa reproduce asexually through binary fission. Many parasitic Protozoa reproduce both asexually and sexually. However, sexual reproduction is rare among free-living protozoa, and it usually occurs when food is scarce or the environment changes drastically. Both isogamy and anisogamy occur in Protozoa, anisogamy being the more common form of sexual reproduction. Protozoans, as traditionally defined, range in size from as little as 1 micrometre to several millimetres, or more. Among the largest are the deep-sea–dwelling xenophyophores, single-celled foraminifera whose shells can reach 20 cm in diameter. Free-living protozoa are common and often abundant in fresh, brackish and salt water, as well as other moist environments, such as soils and mosses. Some species thrive in extreme environments such as hot springs and hypersaline lakes and lagoons. All protozoa require a moist habitat; however, some can survive for long periods of time in dry environments, by forming resting cysts that enable them to remain dormant until conditions improve. All protozoa are heterotrophic, deriving nutrients from other organisms, either by ingesting them whole by phagocytosis or taking up dissolved organic matter or micro-particles (osmotrophy).
Phagocytosis may involve engulfing organic particles with pseudopodia (as amoebae do), taking in food through a specialized mouth-like aperture called a cytostome, or using stiffened ingestion organelles. Parasitic protozoa use a wide variety of feeding strategies, and some may change methods of feeding in different phases of their life cycle. For instance, the malaria parasite Plasmodium feeds by pinocytosis during its immature trophozoite stage of life (ring phase), but develops a dedicated feeding organelle (cytostome) as it matures within a host's red blood cell. Protozoa may also live as mixotrophs, combining a heterotrophic diet with some form of autotrophy. Some protozoa form close associations with symbiotic photosynthetic algae (zoochlorellae), which live and grow within the membranes of the larger cell and provide nutrients to the host. The algae are not digested, but reproduce and are distributed between division products. The organism may benefit at times by deriving some of its nutrients from the algal endosymbionts or by surviving anoxic conditions because of the oxygen produced by algal photosynthesis. Some protozoans practice kleptoplasty, stealing chloroplasts from prey organisms and maintaining them within their own cell bodies as they continue to produce nutrients through photosynthesis. The ciliate Mesodinium rubrum retains functioning plastids from the cryptophyte algae on which it feeds, using them to nourish itself by autotrophy. The symbionts may be passed along to dinoflagellates of the genus Dinophysis, which prey on Mesodinium rubrum but keep the enslaved plastids for themselves. Within Dinophysis, these plastids can continue to function for months. Organisms traditionally classified as protozoa are abundant in aqueous environments and soil, occupying a range of trophic levels. The group includes flagellates (which move with the help of undulating and beating flagella), ciliates (which move by using hair-like structures called cilia) and amoebae (which move by the use of temporary extensions of cytoplasm called pseudopodia). Many protozoa, such as the agents of amoebic meningitis, use both pseudopodia and flagella. Some protozoa attach to the substrate or form cysts, so they do not move around (sessile). Most sessile protozoa are able to move around at some stage in the life cycle, such as after cell division. The term 'theront' has been used for actively motile phases, as opposed to 'trophont' or 'trophozoite', which refer to feeding stages. Unlike plants, fungi and most types of algae, most protozoa do not have a rigid external cell wall but are usually enveloped by elastic structures of membranes that permit movement of the cell. In some protozoa, such as the ciliates and euglenozoans, the outer membrane of the cell is supported by a cytoskeletal infrastructure, known as a pellicle. The pellicle gives shape to the cell, especially during locomotion. Pellicles of protozoan organisms vary from flexible and elastic to fairly rigid. In ciliates and Apicomplexa, the pellicle includes a layer of closely packed vesicles called alveoli. In euglenids, the pellicle is formed from protein strips arranged spirally along the length of the body. Familiar examples of protists with a pellicle are the euglenoids and the ciliate Paramecium. In some protozoa, the pellicle hosts epibiotic bacteria that adhere to the surface by their fimbriae (attachment pili). Some protozoa live within loricas – loose fitting but not fully intact enclosures.
For example, many collar flagellates (choanoflagellates) have an organic lorica or a lorica made from siliceous secretions. Loricas are also common among some green euglenids, various ciliates (such as the folliculinids), and various testate amoebae and foraminifera. The surfaces of a variety of protozoa are covered with a layer of scales and/or spicules. Examples include the amoeba Cochliopodium, many centrohelid heliozoa, and synurophytes. The layer is often assumed to have a protective role. In some, such as the actinophryid heliozoa, the scales only form when the organism encysts. The bodies of some protozoa are supported internally by rigid, often inorganic, elements (as in Acantharea, Polycystinea, Phaeodarea – collectively the 'Radiolaria' – and Ebriida). Protozoa mostly reproduce asexually by binary fission or multiple fission. Many protozoa also exchange genetic material by sexual means (typically, through conjugation), but this is generally decoupled from reproduction. Meiotic sex is widespread among eukaryotes, and must have originated early in their evolution, as it has been found in many protozoan lineages that diverged early in eukaryotic evolution. In the well-studied protozoan species Paramecium tetraurelia, the asexual line undergoes clonal aging, loses vitality and expires after about 200 fissions if the cells fail to undergo autogamy or conjugation. The functional basis for clonal aging was clarified by the transplantation experiments of Aufderheide in 1986. These experiments demonstrated that the macronucleus, and not the cytoplasm, is responsible for clonal aging. Additional experiments by Smith-Sonneborn, Holmes and Holmes, and Gilley and Blackburn showed that, during clonal aging, DNA damage increases dramatically. Thus, DNA damage in the macronucleus appears to be the principal cause of clonal aging in P. tetraurelia. In this single-celled protozoan, aging appears to proceed in a manner similar to that of multicellular eukaryotes (see DNA damage theory of aging).
Ecology
Free-living protozoa are found in almost all ecosystems that contain free water, permanently or temporarily. They have a critical role in the mobilization of nutrients in ecosystems. Within the microbial food web they include the most important bacterivores. In part, they facilitate the transfer of bacterial and algal production to successive trophic levels, but they also solubilize the nutrients within microbial biomass, allowing stimulation of microbial growth. As consumers, protozoa prey upon unicellular or filamentous algae, bacteria, microfungi, and micro-carrion. In the context of older ecological models of the micro- and meiofauna, protozoa may be a food source for microinvertebrates. Most species of free-living protozoa live in similar habitats in all parts of the world. Many protozoan pathogens are human parasites, causing serious diseases such as malaria, giardiasis, toxoplasmosis, and sleeping sickness. Some of these protozoa have two-phase life cycles, alternating between proliferative stages (e.g., trophozoites) and resting cysts, enabling them to survive harsh conditions. A wide range of protozoa live commensally in the rumens of ruminant animals, such as cattle and sheep. These include flagellates, such as Trichomonas, and ciliated protozoa, such as Isotricha and Entodinium. The ciliate subclass Astomatia is composed entirely of mouthless symbionts adapted for life in the guts of annelid worms. Associations between protozoan symbionts and their host organisms can be mutually beneficial.
Flagellated protozoa such as Trichonympha and Pyrsonympha inhabit the guts of termites, where they enable their insect host to digest wood by helping to break down complex sugars into smaller, more easily digested molecules.
======================================== |
[SOURCE: https://he.wikipedia.org/wiki/Grand_Theft_Auto:_Vice_City_Stories] | [TOKENS: 13867] |
Grand Theft Auto: Vice City Stories
Grand Theft Auto: Vice City Stories is a video game in the Grand Theft Auto series (release date: October 31, 2006), developed by Rockstar Leeds in collaboration with Rockstar North and published by Rockstar Games in two versions, for the PSP and the PlayStation 2. The game serves as a prequel whose plot takes place two years before that of Grand Theft Auto: Vice City. It is set in Vice City in 1984, two years before the events of Vice City, and focuses on Lance Vance's brother Victor, who is shot during the drug deal at the opening of Vice City.
Delayed release
The game was originally due to be released in North America on October 17, 2006, and in Europe on October 20, 2006, but in early September the publishers announced that the release was being postponed to at least October 31 in North America. The same announcement set the Australian release date at November 10, 2006. In Europe (apart from the United Kingdom) the game suffered further delays, until the final date of November 10, 2006. The official price of the PSP version is $49.99 in the United States, €49.99 in Europe and £34.99 in the United Kingdom.
Plot
The plot takes place in 1984, when the soldier Victor Vance, known as "Vic", begins his military career in Vice City and starts working for his commanding officer, Sergeant Jerry Martinez. Martinez sends Vic to do all of his dirty work, telling him it will make him rich, but Vic earns nothing for everything he does. He also grows angry at Martinez for sending him to deal with drugs, which damages his reputation. In the course of these jobs he meets Phil Cassidy, a military arms dealer who regularly supplies Martinez with ammunition, and Martinez arranges for Vic and Phil to work for him together. Martinez then decides to set Vic up: he sends him to bring a girl to the base and, while Vic is away, hides marijuana under Vic's bed. When Vic returns to the gate, the base commander, who sees the girl and has found the marijuana, concludes that Vic has broken army rules and throws him out of the military. Vic moves into Phil's home, and although he is no longer a soldier he still has to work for Martinez, who passes the jobs to Phil; since Vic works for Phil, he is forced to do Martinez's dirty work once again. Eventually Martinez tries to eliminate Vic and Phil and sends them straight into an ambush by his men, but the two manage to escape and Phil stops working with Jerry. So that Vic can make a living, Phil sends him to work for his brother-in-law, Marty Jay Williams, and Vic meets Marty's wife and Phil's sister, Louise. When Vic sees how cruelly Marty treats his wife, he persuades Louise to divorce Marty, take the baby with her and move in with her sister. He then stops working for Marty because of Marty's contemptuous treatment of him, and he and Louise decide to work together and hurt Marty's business. The enraged Marty breaks into the Cassidy sisters' house, kidnaps Louise and tries to kill her, but Vic comes to her rescue, kills Marty and takes control of his businesses. Later he also kills Marty's cousin, who had tried to harm Vic's business in revenge for his cousin's murder. Vic's takeover of Marty's businesses sets off a gang war between Vic's gang and the Cholos, and in response he decides to work for the gangster Umberto Robina and go to war against the Cholos. He later receives a phone call from the airport authority telling him that a customer at the airport wants Vic to pick him up; to his surprise, the customer turns out to be his brother Lance. Things deteriorate when the Cholos attack them, but they manage to escape. Vic and Umberto then go into one final battle against the Cholos: they attack the Cholos' main business, destroy it and wipe the gang out of the city. Lance and Vic sign a business deal with a wealthy drug buyer, Bryan Forbes, but then discover that he is a DEA agent and take him captive; while in captivity he deceives them and sends them into ambushes set by his associates. First he sends them to sailors on a ship, who kidnap Lance, though Vic rescues him; then he sends them to a biker bar, where they are nearly killed. Bryan then tries to escape, but Vic manages to kill him.
The two then ambush Martinez's men and steal two trucks full of drugs from him, which makes them rich, but it turns out that the drugs belong to the Mendez brothers, Vice City's biggest drug lords, who now want them dead. The Vance brothers manage to sell the Mendez brothers a story in which Martinez is a DEA agent who tried to damage their business, and the two sides form an alliance. Vic begins working at the top of Vice City's drug trade, but his relationship with Louise starts to fall apart as she becomes addicted to drugs and Vic suspects that she has had an affair with Lance; the two remain estranged for a long time. Vic also starts working for the director Reni, who is gay and later undergoes sex reassignment surgery and becomes a woman, and makes money after successfully protecting the singer Phil Collins. Martinez then returns to action and sends his men, who kidnap Louise, torture her and wound her, but Vic comes to her rescue and the two eventually reconcile. It later emerges that Martinez had been working with the Mendez brothers all along, and the Mendez brothers try to eliminate Vic and Lance, but the two escape the trap. Vic then begins working for Ricardo Diaz and opens a war against the Mendez brothers. The elder Mendez brother, Armando, then learns of the relationship between Vic and Louise and kidnaps her; he also sends his men to Lance's house, where they blow up Lance's car. The furious Lance chases them and breaks into the Mendez brothers' estate, but is captured as well. Vic breaks into the estate and fights Armando, who wields a flamethrower. Vic manages to kill Armando, then enters the next room and finds Louise dying, mortally wounded by Armando. Louise asks Vic to see that her sister continues raising the baby, thanks him for the relationship the two of them had, and finally dies of her wounds. When Vic discovers that Martinez took part in Louise's death, he and Phil Cassidy break into the military base and, at Diaz's request, steal an advanced attack helicopter from Martinez. Vic then tells Diaz that he and Lance are ending their partnership with him after this mission, and flies the attack helicopter into the Mendez brothers' office building. He fights the men of the second Mendez brother, Diego Mendez, and Martinez's men, who also join the battle; then, at the top of the building, Vic fights Diego Mendez and Martinez and kills them both. Lance then arrives to pick Vic up, and the two become the most powerful drug lords in the city.
Gameplay changes
The game's plot is structured in the same way as the other Grand Theft Auto games. The graphics are improved relative to the rest of the series, this time with attention to small details; for example, vehicle engines are more realistic and bodies of water have a realistic texture. The core gameplay combines driving and on-foot elements. On foot, the player can walk, run, swim, jump, attack passers-by and use weapons. The player can also steal vehicles such as cars, boats, planes, helicopters and motorcycles. This game also adds jet skis, quad bikes, a tractor, bicycles and a hovercraft. Although completing story missions is the way to progress through the game and unlock new areas, they are not mandatory: when the player chooses not to take on a mission, they can roam freely and sow destruction and mayhem. Such acts may provoke a response from the authorities whose intensity depends on the level of destruction; first the police appear, then police patrol units, then the FBI and finally the military. The player can also take part in side missions. Side missions existed in earlier games in the series, but here they are upgraded. An interesting addition in this game is the "beach patrol", in which Victor must clear bikers off the beach and keep swimmers safe; completing it grants the ability to swim with no time limit. One of the game's central elements is "empire building", a new element appearing for the first time in the GTA series. It borrows features from earlier games: from Vice City it takes property ownership, and from San Andreas it takes gang warfare. To earn money, the player must build a business that competes with the enemy gang's. The business can be a protection racket, loan sharking, a brothel, drug dealing, smuggling or even robbery. Like many other elements, combat has also been upgraded in this game to the highest level in the series: aiming becomes "smart aiming", and enemies become smarter as well.
The biggest change was made to hand-to-hand combat. If the player has no weapon, they can beat enemies with their fists. If the player is busted or killed, they can bribe the police or the hospital in order to keep their weapons so they are not lost. The hidden packages return in this game in the form of 99 balloons scattered around the city; the 99 balloons allude to the band Nena's 1983 song "99 Luftballons", which appeared in the previous game (GTA Vice City). The graphical improvements since Liberty City Stories include new animations, faster gameplay, longer roads, fewer slow pedestrians and slow cars, more weapons, and an increase in the number of vehicles and non-player characters.
Characters
Victor Vance, known as Vic, is the game's protagonist, a soldier trying to provide for his dysfunctional family. At the Fort Baxter military base in Vice City, Sergeant Jerry Martinez destroys Vic's military career and forces him to turn to organized crime for work. When Vic's brother Lance arrives to gamble on Vice City's huge drug market, Vic is drawn against his will into dealing with rogue DEA (Drug Enforcement Administration) agents, as well as into extortion and a short war with the city's drug lords, the Mendez brothers. He first eliminates Armando Mendez, and after Louise dies he kills Jerry Martinez and Diego Mendez together in the game's final mission and decides to end his and Lance's partnership with Diaz. Two years later, while carrying out a drug deal with the Forelli family, he is murdered in an ambush by Diaz's hit squad. Lance is Victor's younger brother. Victor tends to belittle Lance, because Lance is almost never serious. The first time Lance meets his brother in Vice City, the two have to flee the Cholos gang while Lance drives in unconventional ways. Lance then draws his brother into the drug business, which causes more trouble, since the Mendez brothers hold a monopoly on drug deals in the city. Two years later he is murdered by Tommy Vercetti (Vice City) after betraying him. Louise is Phil Cassidy's younger sister and the wife of Marty Jay Williams, with whom she has a baby daughter. Louise suffers verbal and physical abuse from Marty, who beats her regularly. After a chance meeting with Vic she decides to divorce Marty, takes the baby, moves in with her sister and joins Vic against Marty, damaging his assets. When Marty discovers what she has done, he kidnaps her and tries to kill her, but Vic saves her and kills Marty. When Vic and his brother Lance become rich, Louise believes Vic no longer cares about her, drifts into drugs and has a heated argument with him. She is later kidnapped by Jerry Martinez's hired men, who wound her badly, but is rescued by Vic, who takes her to a hospital and then brings her home, and the two eventually reconcile. In the wake of the Vance brothers' war with the Mendez brothers, the elder brother, Armando Mendez, learns of Vic's relationship with Louise and sends his men to ambush her home and kidnap her just as she is about to go on a date with Vic. At his estate Armando wounds her mortally; when Vic arrives to rescue her and kills Armando, she asks him to help her sister raise the baby, thanks him for their relationship and finally dies of her wounds. The devastated Vic decides to avenge her death and kills the second Mendez brother, Diego, and Martinez, after discovering that they too took part in her death. Jerry is an army sergeant and the game's main antagonist. Martinez does not take his position seriously and abuses his power. A womanizer and regular drug user, he does not hesitate to exploit others to do his dirty work. He caused Victor to be expelled from the army after asking him to bring a prostitute onto the base, and later tried to have Victor assassinated, without success. When Vic discovers that Martinez took part in Louise's death, he kills Martinez and Diego in the Mendez business building. Diego and Armando Mendez deal in drugs more than any of the other criminals in Vice City. Diego is quieter than Armando and speaks only Spanish, although he understands English. The brothers first meet Victor and Lance after the two steal a drug shipment that belonged to them, but after a while they decide to form an alliance.
Toward the end of the game they betray Lance and Victor, cooperating with Martinez and kidnapping Louise in order to lure Victor to them; Vic kills Armando at his estate and kills Diego later in their office building. Phil is an arms dealer for the army and Louise's older brother. Jerry Martinez introduces him to Vic, and the two are forced to do dirty jobs for him until Martinez finally betrays them and tries to eliminate them; the two manage to escape, and afterwards Phil also leaves the army. Phil is a loyal friend of Vic who helps him with whatever he needs, and he also supports the relationship between Vic and his sister. After Louise dies, Phil decides to help Vic avenge her death by staging a diversion that allows Vic to sneak into the military base and steal an advanced attack helicopter from Martinez. Umberto is the head of the Cuban gang. He is a friend of Victor's and respects him. Umberto likes to encourage those who work with him with repeated remarks about whether they "have the balls", which makes most people uncomfortable. He almost never does the dirty work himself. Bryan works for the DEA and asks Victor and Lance to carry out several jobs for him. When Victor and Lance discover that he is connected to law enforcement, they seize him and tie him up to extract information from him. He is killed by Victor after trying to escape. Marty is married to Louise, and they have a baby daughter together. Marty is an alcoholic, which has made him an aggressive and violent man; he regularly beats and abuses Louise until she finally breaks and decides to divorce him. When he discovers that Louise has tried to harm his businesses, he breaks into the home of Louise and her sister, kidnaps Louise and takes her to one of his businesses in order to kill her, but on the way Vic manages to stop him and rescue Louise; Marty tries to kill Vic, but Vic kills him in the end. Reni is a gay, transgender film director. Vic does a great deal of work for Reni and saves them in the mission "So Long Schlong", in which Diego Mendez, who had an affair with Reni, is angry that Reni caused Vic and Diaz to join forces against the Mendez brothers; his men attack Reni's studio and lie in wait near the Malibu Club, and Vic saves Reni. Ricardo is another of Vice City's major drug dealers, alongside the Vance brothers and the Mendez brothers. Reni introduces Vic to Ricardo in the mission "Steal the Deal". Diaz has a deep rivalry with the Mendez brothers, and he sends Vic to destroy their office building in the mission "Last Stand", the final mission of the game. He is killed in 1986 by Tommy Vercetti and Lance Vance as revenge for Vic's death and for the damage to their deal (see GTA Vice City). Gonzalez was the best friend of Colonel Cortez from Vice City, where he is killed for betraying their partnership. Gonzalez is also a major drug dealer, but he is not in a rivalry with Diaz, because he is a friend of Cortez (and Cortez is a very good friend of Diaz). He sends Vic to steal large quantities of drugs and finally sends him to steal the cocaine that Cortez is storing. The angry Cortez finds out, and Gonzalez hides for a while from Cortez and Diaz (who became his enemy after the theft) in his penthouse; eventually Cortez finds him and sends Tommy Vercetti to kill him with a chainsaw in the mission "Treacherous Swine". Phil Collins is a British singer who has come to Vice City for an upcoming performance at the Hyman Memorial Stadium. Unfortunately, his life is threatened by the Forelli crime family, since his manager, Barry Mickelthwaite, owes them $3,000,000. Vic saves Phil from several assassination attempts, and in the end Collins finishes his concert unharmed. The character is based on the British musician of the same name and is voiced by him. Barry Mickelthwaite is Phil Collins' manager and is deeply in debt to Giorgio Forelli, who heads the Forelli family. Giorgio threatens Collins' life with several assassination attempts, all of which are fortunately foiled by Vic. After the concert succeeds, Barry finally decides to repay his debts to the Forelli family.
Empires
The "empire" is the game's new feature, somewhat reminiscent of the "territories" gameplay in San Andreas. Throughout the city there are a number of businesses, ranging from small businesses to huge buildings. Several gangs in the game have businesses, and each has a vehicle parked outside its business. Attacking the vehicle triggers a special mission in which the player must kill the gang members and then destroy the business.
The player can then buy the business and decide what it will be and at what level. The more expensive the business, the more money it brings in. Each type of business has its own set of missions, and every high-level business also gives the player an outfit associated with that business. Several gangs' businesses can be taken over. After the player captures a business, the gang it was taken from sends assassins, who are gang members riding that gang's special vehicle: the Sharks send their Rancher with four assassins armed with AK-47s, the Bikers send two assassins on the Biker Angel armed with a Scorpion rifle and an SMG, and the Cholos send their car, the Cholo Sabre, with two men armed with a Scorpion rifle and a pistol.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Blockly] | [TOKENS: 312] |
Blockly
Blockly is a client-side JavaScript library for creating block-based visual programming languages (VPLs) and editors. A project of Google, it is free and open-source software released under the Apache License 2.0. It typically runs in a web browser. Blockly uses visual blocks that help simplify programming, and can generate code in JavaScript, Lua, Dart, Python, or PHP.
History
Blockly development began in summer 2011. In October 2025, the Raspberry Pi Foundation announced that from 10 November 2025, the Blockly open-source library and assets, and key members of the Blockly team, would transition from Google to the Raspberry Pi Foundation.
User interface
The default graphical user interface (GUI) of the Blockly editor consists of a toolbox and a workspace, where a user can drag and drop and rearrange blocks. The workspace also includes, by default. The editor can be modified easily to customize and limit the available editing features and blocks.
Customization
Blockly includes a set of visual blocks for common operations, and can be customized by adding more blocks. New blocks require a block definition and a generator. The definition describes the block's appearance (user interface) and the generator describes the block's translation to executable code. Definitions and generators can be written in JavaScript, or using a visual set of blocks, the Block Factory, which allows new blocks to be described using extant visual blocks; the intent is to make creating new blocks easier.
Applications
Blockly is used in several notable projects.
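As a rough illustration of the code-generation capability mentioned above, the snippet below shows the kind of Python that a simple Blockly block program (a "repeat 3 times" loop wrapping a "print" block) could produce. The loop-variable name and exact formatting are assumptions made for illustration, not output quoted from a real generator.

# Hypothetical output of Blockly's Python generator for a two-block program:
# a "repeat 3 times" loop block containing a "print" block.
for count in range(3):
    print('Hello from a block!')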
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-bini-32] | [TOKENS: 4314] |
Python (programming language)
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms.
History
Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was itself inspired by SETL, and was intended to be capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. All of the older supported 3.x branches received security updates, down to Python 3.9.24 and then 3.9.25, the final release in the 3.9 series. Since November 2025, Python 3.10 has been the oldest supported branch. An alpha of Python 3.15 has been released, and Android has an official downloadable executable available for Python 3.14. Releases receive two years of full support followed by three years of security support.
Design philosophy and features
Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, which is a collection of short aphorisms. However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do...while loops, which van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and a Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
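As a brief aside to the functional facilities described above (filter, map and reduce, comprehensions, generator expressions, and the itertools and functools modules), the following self-contained sketch shows them in use. The specific numbers and variable names are illustrative only and are not taken from the article.

from functools import reduce
from itertools import accumulate

numbers = [1, 2, 3, 4, 5]

# List comprehension and generator expression
squares = [n * n for n in numbers]             # [1, 4, 9, 16, 25]
even_squares = (s for s in squares if s % 2 == 0)

# filter/map/reduce in the "Lisp tradition"
odds = list(filter(lambda n: n % 2, numbers))  # [1, 3, 5]
doubled = list(map(lambda n: 2 * n, numbers))  # [2, 4, 6, 8, 10]
total = reduce(lambda a, b: a + b, numbers)    # 15

# itertools: running totals computed lazily
running = list(accumulate(numbers))            # [1, 3, 6, 10, 15]

print(squares, list(even_squares), odds, doubled, total, running)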
For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability.
Syntax and semantics
Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following. The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions take a wide variety of forms. In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby.
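A small sketch of that distinction (written for this text; the variable names are arbitrary): ordinary assignment is a statement and cannot appear where an expression is required, whereas the assignment expression (:=) added in Python 3.8, mentioned earlier, can.

# Ordinary assignment is a statement, so it cannot be used inside an expression:
#     doubled = [(y = x * 2) for x in range(5)]   # SyntaxError
# A lambda body must be a single expression, so it cannot contain statements either.

# The assignment expression (":=") is an expression, so it may appear
# inside a condition or a comprehension:
values = [4, 11, 25, 3]
if (n := len(values)) > 3:
    print(f"{n} values")          # prints "4 values"

halved = [half for v in values if (half := v / 2) > 2]
print(halved)                     # [5.5, 12.5]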
This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics.
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. An example of a function that prints its inputs appears in the sketch below. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header.
Code examples
A "Hello, World!" program and a program to calculate the factorial of a non-negative integer are included in the sketch below.
Libraries
Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages.
Development environments
Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs.
Implementations
CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
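Returning to the syntax material above, the following minimal sketch illustrates the constructs the text refers to: a function that prints its inputs, a parameter with a default value, a chained comparison, a "Hello, World!" program, and the factorial of a non-negative integer. It is an illustration written for this text rather than the article's original listings, and the function names are arbitrary.

def show_inputs(a, b, separator=", "):
    """Print the two inputs; 'separator' has a default value."""
    print(a, b, sep=separator)

def factorial(n: int) -> int:
    """Factorial of a non-negative integer, computed iteratively."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print("Hello, World!")        # the classic first program
show_inputs("spam", "eggs")   # prints: spam, eggs
print(factorial(5))           # 120

a, b, c = 1, 2, 3
print(a < b < c)              # True: chained comparison, as in mathematics
print(4 % -3, -7 // 2)        # -2 and -4: modulo and floor division round toward negative infinity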
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there was unofficial support for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and far fewer operating systems are supported now than in the past, as many outdated platforms have been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which causes binary sizes to be massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations exist; Stackless Python, for example, is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, where the source language is unrestricted Python, a subset of Python, or a language similar to Python; there are also specialized compilers, as well as older projects and compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language.
Language development
Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies.
Naming
Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Printer_(computing)] | [TOKENS: 4529] |
Printer (computing)
A printer is a peripheral machine which makes a durable representation of graphics or text, usually on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers.
History
The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000. He also had plans for a curve plotter, which would have been the first computer graphics printer if it had been built. The first patented printing mechanism for applying a marking medium to a recording medium, or more particularly an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium, was patented in 1962 by C. R. Winston of the Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966. The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson. The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints. The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in the Apple LaserWriter the following year, set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace. The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being fused deposition modeling.
Types
Technology
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies. A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly. The following printing technologies are routinely found in modern printers: A laser printer rapidly produces high-quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process, but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor. Another toner-based printer is the LED printer, which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum. Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers. Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored ink, similar in consistency to candle wax, which are melted and fed into a piezo crystal operated print-head. A thermal transfer printhead jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink is also called phase-change or hot-melt ink and was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation of Windham, NH was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and later US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line.
Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001. A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers. Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink"). The following technologies are either obsolete, or limited to special applications though most were, at one time, in widespread use. Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used. Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second. The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS. 
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text as clear and crisp as a typewriter's. The fastest letter-quality printers printed at 30 characters per second.

The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page; a brief sketch of this column-of-pins idea follows below. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however, the text is generally of poorer quality than that of impact printers that use letterforms (type). Dot-matrix printers can be broadly divided into two major classes, character-based and line-based (that is, printing a single horizontal series of pixels across the page at a time), referring to the configuration of the print head.

In the 1970s and '80s, dot matrix printers were one of the more common types of printer for general home and small-office use. Such printers normally had either 9 or 24 pins on the print head (early 7-pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands, such as the Commodore VIC-1525, using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized, longitudinally ribbed platen behind the paper that rotated rapidly, with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at higher quality and started to offer additional type styles, and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.

Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbon as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8–16 times as long at high resolution.

Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts, using continuous stationery with carbonless copy paper.
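The column-of-pins idea mentioned above can be illustrated with a short sketch: a character is stored as a small bitmap, and as the head sweeps across the paper each vertical column of that bitmap determines which pins fire. The 5x7 glyph data below is hypothetical, not taken from any real printer's character ROM.

```python
# Illustrative sketch of how a 7-pin dot matrix head forms a character:
# each column of the glyph corresponds to one head position, and each set
# bit fires one pin. The 5x7 bitmap for "A" is invented for illustration.

GLYPH_A = [   # one string per pixel row, top to bottom
    ".###.",
    "#...#",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#...#",
]

def columns(glyph):
    """Yield the glyph one vertical column at a time, as the head sweeps across."""
    for x in range(len(glyph[0])):
        yield [row[x] == "#" for row in glyph]  # True = fire this pin

for col in columns(GLYPH_A):
    # Each printed line here stands for one head position and its 7 pin states.
    print("".join("#" if pin else "." for pin in col))
```

A 24-pin head works the same way but with finer vertical resolution, which is what made Near Letter Quality output possible.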
Impact printing also has security advantages: ink impressed into the paper by force is harder to erase invisibly. After the end of the twentieth century, however, dot-matrix printers began to be superseded even as receipt printers.

Line printers print an entire line of text at a time. Four principal designs exist. In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon, which then presses against the character form, and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so chain and bar printers were considered to produce higher-quality print.

Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1,100 lines per minute or faster (at a typical 60 to 66 lines per page, roughly 17 to 18 pages per minute), frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce top-quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.

Liquid ink electrostatic printers use chemically coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating, the scale distortion is less than ±0.1%. (Laser printers, by comparison, have an accuracy of about ±1%.) Worldwide, most survey offices used this type of printer before color inkjet plotters became popular. Liquid ink electrostatic printers were mostly available in widths of 36 to 54 inches (910 to 1,370 mm), and some offered 6-color printing. They were also used to print large billboards. The technology was first introduced by Versatec, which was later bought by Xerox; 3M also used to make these printers.

Pen-based plotters were an alternative printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special-purpose pens that are mechanically run over the paper to create text and images; a brief sketch of generating plotter commands follows below. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.

A number of other sorts of printers are important for historical reasons, or for special-purpose uses.
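Pen plotters are typically driven by vector commands rather than rasters. The sketch below generates a few basic HP-GL commands (initialize, select pen, pen up, pen down) to trace a rectangle; HP-GL was a widely used plotter language, but command support varied by device, and the coordinates and helper function here are illustrative assumptions rather than a complete driver.

```python
# Minimal sketch: generating HP-GL, a common pen-plotter command language.
# Coordinates are in plotter units; exact command coverage varied between
# devices, so treat this as an illustration, not a drop-in driver.

def rectangle_hpgl(x, y, width, height, pen=1):
    corners = [(x, y), (x + width, y), (x + width, y + height),
               (x, y + height), (x, y)]
    cmds = ["IN;",                                  # initialize plotter
            f"SP{pen};",                            # select pen
            f"PU{corners[0][0]},{corners[0][1]};"]  # move with pen up
    for cx, cy in corners[1:]:
        cmds.append(f"PD{cx},{cy};")                # draw a continuous line, pen down
    cmds.append("PU;SP0;")                          # lift pen and store it
    return "".join(cmds)

print(rectangle_hpgl(100, 100, 4000, 2000))
```

Because the pen traces continuous strokes, line quality is independent of any dot pitch, which is the property the paragraph above credits for the high resolution of plotted drawings.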
Attributes

Printers can be connected to computers in many ways: directly by a dedicated data cable such as USB; through a short-range radio link such as Bluetooth; over a local area network using cables (such as Ethernet) or radio (such as Wi-Fi); or on a standalone basis without a computer, using a memory card or other portable data storage device.

Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers, to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers. Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as Epson's ESC/P (see the raw network printing sketch below). The diversity of mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster.

The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures, which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and to letter paper size, about 6% shorter, in North America.

The data received by a printer may be a string of characters, a bitmapped image, a vector image, or a computer program written in a page description language. Some printers can process all four types of data, others not. Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.

A monochrome printer can only produce images in shades of a single color. Most monochrome printers can produce only two "colors": black (ink) and white (no ink). With halftoning techniques, however, such a printer can produce acceptable grey-scale images (a small dithering sketch follows below). A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film.

The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a specific cartridge depends on a number of factors. For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield. To fairly compare the operating expenses of printers with a relatively small ink cartridge against printers with a larger, more expensive toner cartridge (which typically holds more toner and so prints more pages before it needs to be replaced), many people prefer to estimate operating expenses in terms of cost per page (CPP), as in the sketch below.
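As a concrete illustration of network connectivity and raw printer data, many network printers accept a job sent directly to TCP port 9100 (the AppSocket/"JetDirect" convention). The sketch below sends plain text followed by a form feed; the hostname is a placeholder, and whether a given device accepts raw text this way depends on the printer.

```python
# Sketch: sending a plain-text job to a network printer over raw TCP port 9100
# (the AppSocket/"JetDirect" convention). "printer.local" is a placeholder
# hostname, and not every printer accepts raw text this way; printers that
# understand an escape-code command set such as Epson's ESC/P receive those
# control sequences over the same kind of raw connection.

import socket

def print_raw(host: str, data: bytes, port: int = 9100) -> None:
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(data)

job = b"Hello from a raw print job.\r\n\x0c"  # 0x0c = form feed, ejects the page
print_raw("printer.local", job)
```

In practice most jobs go through a driver or spooler (for example via PCL, PostScript or IPP) rather than raw text, but the raw path makes the underlying data stream visible.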
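The dithering sketch referred to above shows one common halftoning technique, ordered (Bayer) dithering: each 8-bit grey pixel is compared against a position-dependent threshold, so a printer that can only place or omit a black dot still approximates grey levels by varying dot density. The gradient input is made up purely for demonstration.

```python
# Sketch of ordered (Bayer) dithering, one common halftoning technique:
# each 8-bit grey pixel is compared with a position-dependent threshold,
# producing only ink/no-ink dots whose density approximates the grey level.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither(grey_rows):
    """grey_rows: 2D list of 0-255 values. Returns rows of '#' (ink) and ' ' (no ink)."""
    out = []
    for y, row in enumerate(grey_rows):
        line = ""
        for x, value in enumerate(row):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) * 16  # scale 0..15 to 0..255
            line += " " if value > threshold else "#"
        out.append(line)
    return out

# Demo input: a simple horizontal gradient from black (0) to white (255).
gradient = [[int(255 * x / 31) for x in range(32)] for _ in range(8)]
for line in dither(gradient):
    print(line)
```

Error-diffusion methods such as Floyd–Steinberg are another family of halftoning techniques and generally give smoother results at the cost of more computation.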
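The cost-per-page sketch mentioned above is shown next. All cartridge prices and page yields here are invented for illustration only; the point is the comparison method, in which a small cheap cartridge with a low yield can cost more per page than a large expensive one.

```python
# Sketch of a cost-per-page (CPP) comparison. All prices and page yields
# below are invented for illustration; real figures vary widely by model.

def cost_per_page(cartridge_price, page_yield, printer_price=0.0, pages_over_life=None):
    """Consumables-only CPP, optionally amortizing the printer purchase price."""
    cpp = cartridge_price / page_yield
    if pages_over_life:
        cpp += printer_price / pages_over_life
    return cpp

inkjet = cost_per_page(cartridge_price=30.0, page_yield=300,
                       printer_price=60.0, pages_over_life=10_000)
laser = cost_per_page(cartridge_price=80.0, page_yield=2_500,
                      printer_price=250.0, pages_over_life=10_000)
print(f"inkjet: {inkjet:.3f} per page, laser: {laser:.3f} per page")
```

With these made-up figures the cheaper printer costs roughly twice as much per page, which is the "cheap printer – expensive ink" trade-off discussed below.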
Retailers often apply the "razor and blades" business model: a company may sell a printer at cost and make profits on the ink cartridges, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it. Other manufacturers, in reaction to the challenges of this business model, choose to make more money on printers and less on ink, and promote that approach in their advertising campaigns. This produces two clearly different propositions: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer's decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.

Printer steganography is a type of steganography ("hiding data within data") produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.

Manufacturers and market share

As of 2020–2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera. Other known vendors include NEC, Ricoh, Xerox, Lexmark, OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic.