[SOURCE: https://github.com/premium-support] | [TOKENS: 2134]
Get 24/7 support for your business with GitHub Premium Support. Protect your business and manage IT health with a comprehensive support plan. Drive operational efficiency and boost uptime with dedicated GitHub experts. Empower your team and meet your goals with the resources to maximize your investment. Looking for general support questions? Contact GitHub Support. Already an existing customer? Learn more about GitHub Premium Support and GitHub Premium Plus Support to discover the plan that's right for you.

Why choose GitHub Premium Support? Hal Stanley // VP Service Offer Management Research & Advisory, TSIA

Discover the plan that's right for you. Use the dropdown filters to reflect your organization's properties. The comparison covers the support included with your GitHub Enterprise license, GitHub Premium Support, and GitHub Premium Plus Support:

- Availability: the included plan comes with Enterprise Cloud and Enterprise Server; Premium and Premium Plus are available for Enterprise Cloud and Enterprise Server.
- Hours of coverage: 24/5 with the included plan; 24/7 with Premium and Premium Plus.
- Initial response time: under 8 hours with the included plan. With Premium: 30 minutes for Urgent (including initial troubleshooting), 4 hours for High, 48 hours for Normal, 48 hours for Low. With Premium Plus: 30 minutes for Urgent (including initial troubleshooting), 4 hours for High, 24 hours for Normal, 48 hours for Low.
- Premium content: access included with Premium and Premium Plus.
- Training: 1 virtual training class per year with Premium and Premium Plus, with topics such as "GitHub for developers" and "GitHub for admins".
- Named support members: 20 with Premium, 40 with Premium Plus. These members determine if incoming inquiries can be addressed via their company's admin or only by GitHub.
- Ticket handling: priority ticket handling with Premium; priority ticket handling plus a named Customer Reliability Engineer with Premium Plus.
- Escalation management: for High and Urgent priority tickets.
- Incident response management: for Urgent priority tickets, as needed. Ensures you have the technical resources needed for case resolution, and is available 24/7.
- Health checks: unlimited automated Health Check reports (see "Generating a Health Check for your enterprise") with Premium and Premium Plus; quarterly enhanced health checks with findings, interpretations, and recommendations from a CRE (by request) with Premium Plus.
- Crisis prevention (Premium Plus): up to four sessions about reliability best practices, preparing for a potential incident, and efficiently interacting with GitHub Support.
- Technical advisory hours (Premium Plus): 12 hours per quarter. Hours can be scheduled at your discretion; you can use them for technical tasks, such as prepping for a GitHub Enterprise Server upgrade.
- Additional services by request (Premium Plus): delivered upon request via our Customer Reliability Engineers.
- Event support: dedicated senior support engineer coverage for high-traffic or high-risk events. Support is available in 24-hour increments, with the most common packages offering 24- or 48-hour coverage.

See what customers are saying about GitHub Premium Support: "Our support engineer was very helpful in pointing me to the exact resource I needed in minutes." "The support agent took care of the request with a great sense of urgency and addressed the issue well." "I'd like to applaud our engineer for how he exceptionally responded to our query. This ticket was not a simple, one-answer investigation and I was very happy with how he explored multiple angles to investigate." "Our engineer was super helpful and spot on with figuring out the problem. The steps he suggested helped me figure out the problem quickly. Kudos!"

Meet your dedicated GitHub Premium Support team. Premium Support Engineers, available only for GitHub Premium Support customers, are dedicated resources who manage and coordinate your entire GitHub Premium Support experience. Support Incident Coordinators are responsible for any major incident management, from initiation until resolution, and are available to you 24/7. Customer Reliability Engineers (CREs), available only for GitHub Premium Plus Support customers, know your customer account in detail and can provide answers faster than Premium Support Engineers.

Frequently asked questions

GitHub Premium Support helps customers implement GitHub Enterprise quickly and effectively across the organization with 24/7 support. For pricing information, please get in touch with the GitHub Premium Support sales team. There are three levels of support; please refer to our plan comparison table for more details. If you are an existing GitHub Premium Support customer, please sign in to our support portal. If you don't already have GitHub Premium Support, please contact sales.
Escalation and incident management is the ability to escalate a ticket's progression in the GitHub support portal. After someone escalates a ticket, Support Incident Coordinators orchestrate all necessary parties to resolve the ticket. Additionally, Senior Escalation Engineers (SEEs) facilitate GitHub-internal technical communications and liaise with the rest of GitHub to improve the support team's capability in similar future circumstances. Incident response management helps manage the technical resources needed for case resolution. Support Incident Coordinators are available for incident response management 24/7.

GitHub Premium Support and GitHub Premium Plus Support customers have SLAs. For urgent priority tickets, your SLA guarantees a 30-minute initial response time, which includes troubleshooting. For high priority tickets, your SLA provides a four-hour initial response time. For initial troubleshooting, the assigned Premium Support Engineer/Customer Reliability Engineer will review and acknowledge your ticket. To better understand the issue and start troubleshooting, the engineer may ask for additional information such as screenshots, error messages, log files, diagnostics files, support bundles, or the output of specific console commands. They may also collaborate with others in support or engineering, or with the regional incident commander. If a callback was requested, the engineer will determine if screen sharing is the most effective way to drive ticket resolution. If so, they will invite you to join a screen-sharing session.

GitHub Premium Support and GitHub Premium Plus Support customers are entitled to unlimited automated health check reports. Additionally, GitHub Premium Plus Support customers can request quarterly enhanced health checks with findings, interpretations, and recommendations from a Customer Reliability Engineer (CRE).

Crisis prevention allows GitHub Enterprise Server customers to prepare for — and experience — an incident without risk. Your Customer Reliability Engineer (CRE) guides your team through an incident simulation in a safe and controlled environment. Crisis prevention consists of up to four sessions about reliability best practices, preparing for a potential incident, and efficiently interacting with GitHub Support. After the incident simulation, your CRE will run a detailed retrospective, uncovering lessons learned and improvement suggestions for the future.

GitHub Premium Support and GitHub Premium Plus Support customers have service-level agreements (SLAs) for initial response. For urgent priority tickets, the initial response SLA guarantees a 30-minute initial response time, which includes troubleshooting. For high-priority tickets, the initial response SLA provides a four-hour initial response time. We currently do not provide estimates for time to resolution, as the complexity of tickets varies. However, we review these metrics on a regular basis and reduce times whenever possible.

You can get support via online ticket submission if you're using the basic plan included with your GitHub Enterprise license. If you have GitHub Premium Support or GitHub Premium Plus Support, you can submit a ticket online. For urgent tickets, GitHub Premium Support and GitHub Premium Plus Support customers can request a callback and have a screen-sharing session with one of our Premium Support Engineers or Customer Reliability Engineers (CREs). Please refer to our plan comparison table for more details.

Premium Plus customers may use up to 12 technical advisory hours per quarter.
Unused technical advisory hours may not be carried over into the next quarter. There are multiple ways you can use technical advisory hours, including technical tasks such as prepping for a GitHub Enterprise Server upgrade.

Yes! GitHub Premium Support and GitHub Premium Plus Support customers receive 24/7 support.

Customers get access to one virtual training class per year, with topics such as "GitHub for developers" and "GitHub for admins". We recommend limiting training sessions to a maximum of 16 participants to ensure an optimal provider-to-participant ratio and a high-quality delivery experience. However, in specific cases where it makes sense, we can accommodate up to 20–25 participants while maintaining our commitment to delivering a valuable training experience for your team.

GitHub Premium Plus Support customers get an assigned Customer Reliability Engineer (CRE), quarterly enhanced health checks, access to crisis prevention, technical advisory hours, and many additional benefits, which you can review in our plan comparison table. A CRE knows your customer account in detail and can help you resolve cases faster than a Premium Support Engineer.

Most customers upgrade to GitHub Premium Support because they need initial response SLAs for urgent and high priority requests, phone support, screen-share support for critical issues, and health checks. To see a full list of features for GitHub Premium Support packages, please refer to our plan comparison table.

Ready to maximize your investment? Get in touch with a GitHub Premium Support specialist today. Click below to fill out the form, and our management team will contact you within 48 hours.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Virgil] | [TOKENS: 6423]
Virgil

Publius Vergilius Maro (Classical Latin: [ˈpuːbliʊs wɛrˈɡɪliʊs ˈmaroː]; 15 October 70 BC – 21 September 19 BC), usually called Virgil or Vergil (/ˈvɜːrdʒɪl/ VUR-jil) in English, was an ancient Roman poet of the Augustan period. He composed three of the most famous poems in Latin literature: the Eclogues (or Bucolics), the Georgics, and the epic Aeneid. Some minor poems, collected in the Appendix Vergiliana, were attributed to him in ancient times, but modern scholars regard these as spurious, with the possible exception of some short pieces. Already acclaimed in his lifetime as a classic author, Virgil rapidly replaced Ennius and other earlier authors as a standard school text, and stood as the most popular Latin poet through late antiquity, the Middle Ages, and early modernity, exerting major influence on Western literature. Geoffrey Chaucer assigned Virgil a uniquely prominent position in history in The House of Fame (1374–85), describing him as standing on a pilere / that was of tinned yren clere ("on a pillar that was of bright tin-plated iron"), and in the Divine Comedy, in which Virgil appears as the author's guide through Hell and Purgatory, Dante pays tribute to Virgil with the words tu se' solo colui da cu'io tolsi / lo bello stile che m'ha fatto onore (Inf. I.86–7) ("thou art alone the one from whom I took the beautiful style that has done honour to me"). In the 20th century, T. S. Eliot famously began a lecture on the subject "What Is a Classic?" by asserting as self-evidently true that "whatever the definition we arrive at, it cannot be one which excludes Virgil – we may say confidently that it must be one which will expressly reckon with him".

Traditional biography

Biographical information about Virgil is transmitted chiefly in vitae ("lives") of the poet, prefixed to commentaries on his work by Probus, Donatus, and Servius. The life given by Donatus is considered to closely reproduce the life of Virgil from a lost work of Suetonius on the lives of famous authors, just as Donatus used it for the poet's life in his commentary on Terence, where Suetonius is explicitly credited. The far shorter life given by Servius likewise seems to be an abridgement of Suetonius except for one or two statements. Varius is said to have written a memoir of his friend Virgil, and Suetonius likely drew on this lost work and other sources contemporary with the poet. A life written in verse by the grammarian Phocas (probably active in the 4th to 5th centuries AD) differs in some details from Donatus and Servius. Henry Nettleship believed the life attributed to Probus may have drawn independently from the same sources as Suetonius, but it is attributed by other authorities to an anonymous author of the 5th or 6th century AD who drew on Donatus, Servius, and Phocas. The Servian life was the principal source of Virgil's biography for medieval readers, while the Donatian life enjoyed a more limited circulation, and the lives of Phocas and Probus remained largely unknown. Although the commentaries record much factual information about Virgil, some of their evidence can be shown to rely on allegorizing and on inferences drawn from his poetry.
For this reason, details regarding Virgil's life story are considered somewhat problematic.: 1602 According to the ancient vitae, Publius Vergilius Maro was born on the Ides of October during the consulship of Pompey and Crassus (15 October 70 BC) in the village of Andes, near Mantua in Cisalpine Gaul (northern Italy, added to Italy proper during his lifetime). The Donatian life reports that some say Virgil's father was a potter, but most say he was an employee of an apparitor named Magius, whose daughter he married. According to Phocas and Probus, the name of Virgil's mother was Magia Polla. The gentilicium of Virgil's maternal family, Magius, and failure to distinguish the genitive form of this name (Magi) in Servius' life, from the genitive magi of the noun magus ("magician"), probably contributed to the rise of the medieval legend that Virgil's father was employed by a certain itinerant magician, and that Virgil was a magician. Analysis of his name has led some to believe he descended from earlier Roman colonists. Modern speculation is not supported by narrative evidence from his writings or later biographers. A tradition of obscure origin, which was accepted by Dante, identifies Andes with modern Pietole, two or three miles southeast of Mantua. The ancient biography attributed to Probus records that Andes was thirty Roman miles (about 45 kilometres or 28 miles) from Mantua. There are eight or nine references to the gens to which Vergil belonged, gens Vergilia, in inscriptions from Northern Italy. Out of these, four are from townships remote from Mantua, three appear in inscriptions from Verona, and one in an inscription from Calvisano, a votive offering to the Matronae (a group of deities) by a woman called Vergilia, asking the goddesses to deliver from danger another woman, called Munatia. A tomb erected by a member of the gens Magia, to which Virgil's mother belonged, is found at Casalpoglio, just 12 kilometres (7.5 mi) from Calvisano. In 1915, G. E. K. Braunholtz drew attention to the proximity of these inscriptions to each other, and the fact that Calvisano is exactly 30 Roman miles from Mantua, which led Robert Seymour Conway to theorize that these inscriptions have to do with relatives of Virgil, and Calvisano or Carpenedolo, not Pietole, is the site of Andes. E. K. Rand defended the traditional site at Pietole, noting that Egnazio's 1507 edition of Probus's commentary, supposedly based on a "very ancient codex" from Bobbio Abbey which can no longer be found, says that Andes was three miles from Mantua, and arguing this is the correct reading. Conway replied that Egnazio's manuscript cannot be trusted to have been as ancient as Egnazio claimed it was, nor can we be sure that the reading "three" is not Egnazio's conjectural correction of his manuscript to harmonize it with the Pietole tradition, and all other evidence strongly favours the unanimous reading of the other witnesses of "thirty miles". Other studies claim that today's consideration for ancient Andes should be sought in the Casalpoglio area of Castel Goffredo. By the 4th or 5th century AD the original spelling Vergilius had been changed to Virgilius, and the latter spelling spread to modern European languages. This latter spelling persisted even though, as early as the 15th century, the classical scholar Poliziano had shown Vergilius to be the original spelling. Today, the anglicisations Vergil and Virgil are both considered acceptable. 
There is speculation that the spelling Virgilius might have arisen due to a pun, since virg- carries an echo of the Latin word for "wand" (uirga), Virgil being particularly associated with magic in the Middle Ages. There is also a possibility that virg- is meant to evoke the Latin virgo ("virgin"); this would be a reference to the fourth Eclogue, which has a history of Christian, and specifically Messianic, interpretations.[i] Virgil spent his boyhood in Cremona until his 15th year (55 BC), when he is said to have received the toga virilis on the very day Lucretius died. From Cremona, he moved to Milan, and shortly afterwards to Rome. After briefly considering a career in rhetoric and law, Virgil turned his talents to poetry. Despite the biographers' statements that Virgil's family was of modest means, these accounts of his education, as well as of his ceremonial assumption of the toga virilis, suggest his father was a wealthy equestrian landowner. He is said to have been tall and stout, with a swarthy complexion and a rustic appearance. Virgil seems to have suffered bad health throughout his life and in some ways lived the life of an invalid. Schoolmates considered Virgil shy and reserved, and he was nicknamed "Parthenias" ("virgin") because of his aloofness. The biographical tradition asserts that Virgil began the hexameter Eclogues (or Bucolics) in 42 BC and it is thought the collection was published around 39–38 BC, although this is controversial.: 1602 After defeating the army led by the assassins of Julius Caesar in the Battle of Philippi (42 BC), Octavian tried to pay off his veterans with land expropriated from towns in northern Italy, which—according to tradition—included an estate near Mantua belonging to Virgil. The loss of Virgil's family farm and the attempt through poetic petitions to regain his property, were seen as his motives in the composition of the Eclogues. This is now thought to be an unsupported inference from interpretations of the Eclogues. In Eclogues 1 and 9, Virgil indeed dramatizes the contrasting feelings caused by the brutality of the land expropriations through pastoral idiom, but offers no indisputable evidence of the supposed biographic incident. Sometime after the publication of the Eclogues, probably before 37 BC,: 1603 Virgil became part of the circle of Gaius Maecenas, Octavian's capable political adviser, who sought to counter sympathy for Antony among the leading families by rallying Roman literary figures to Octavian's side. Virgil came to know many other leading literary figures of the time, including Horace, in whose poetry he is often mentioned, and Varius Rufus, who later helped finish the Aeneid. At Maecenas's insistence, according to the tradition, Virgil spent the ensuing years (perhaps 37–29 BC) on the long dactylic hexameter poem called the Georgics (from Greek, "On Working the Earth"), which he dedicated to Maecenas. Virgil worked on the Aeneid during the last eleven years of his life (29–19 BC), commissioned, according to Propertius, by Augustus. According to the tradition, Virgil traveled to the senatorial province of Achaea in Greece, in about 19 BC, to revise the Aeneid. After meeting Augustus in Athens and deciding to return home, Virgil caught a fever while visiting a town near Megara. After crossing to Italy by ship, weakened with disease, Virgil died in Apulia on 21 September 19 BC. 
Augustus ordered Virgil's literary executors, Lucius Varius Rufus and Plotius Tucca, to disregard Virgil's wish that the poem be burned, instead ordering it to be published with as few editorial changes as possible.: 112 After his death at Brundisium according to Donatus, or Taranto according to late manuscripts of Servius, Virgil's remains were transported to Naples, where his tomb was engraved with an epitaph he had composed: Mantua me genuit; Calabri rapuere; tenet nunc Parthenope. Cecini pascua, rura, duces; "Mantua gave me life, the Calabrians took it away, Naples holds me now; I sang of pastures, farms, and commanders." (transl. Bernard Knox) Martial reports that Silius Italicus annexed the site to his estate (11.48, 11.50), and Pliny the Younger says that Silius "would visit Virgil's tomb as if it were a temple" (Epistulae 3.7.8). The structure known as Virgil's tomb is found at the entrance of an ancient Roman tunnel (grotta vecchia) in Piedigrotta, a district 1.9 mi (3 km) from the centre of Naples, near the Mergellina harbour, on the road heading north along the coast to Pozzuoli. While Virgil was already the object of literary admiration and veneration before his death, in the Middle Ages his name became associated with miraculous powers, and for a couple of centuries his tomb was the destination of pilgrimages and veneration. A famous medieval legend that Paul the Apostle had visited Virgil's tomb and wept that so great a poet had died without the Christian faith is referenced in a liturgical hymn said to have been used on Paul's feast day at Mantua: Ad Maronis mausoleum Ductus, fudit super eum Piæ rorem lacrymæ; Quem te, inquit, reddidissem, Si te vivum invenissem, Poetarum maxime! When to Maro's tomb they brought him, Tender grief and pity wrought him To bedew the stone with tears; "What a saint I might have crowned thee Had I only living found thee, Poet first and without peers!" However, Johann Friedrich Heinrich Schlosser was unable to find a manuscript of this hymn, and reported that he had only heard these verses recited from memory by a brother who had lived at Mantua. Through the 19th century, the supposed tomb attracted travellers on the Grand Tour, and still draws visitors. Works According to the commentators, Virgil received his first education when he was five and later went to Cremona, Milan, and finally Rome to study rhetoric, medicine, and astronomy, which he would abandon for philosophy. From Virgil's admiring references to the neoteric writers Asinius Pollio and Cinna, it has been inferred that he was, for a time, associated with Catullus's neoteric circle. According to the Catalepton, he began to write poetry while in the Epicurean school of Siro in Naples. A group of small works attributed to the youthful Virgil by the commentators survive collected under the title Appendix Vergiliana, but are considered spurious by scholars. One, the Catalepton, consists of fourteen short poems,: 1602 some of which may be Virgil's, and a short narrative poem Culex ("The Gnat"), was attributed to Virgil as early as the 1st century AD. The Eclogues (from the Greek for "selections") are a group of ten poems roughly modeled on the bucolic ("pastoral" or "rural") poetry of the Hellenistic poet Theocritus, which were written in dactylic hexameter. While some readers have identified Virgil with various characters and their vicissitudes, whether gratitude by an old rustic to a new god (Ecl. 1), frustrated love by a rustic singer for a distant boy (his master's pet, Ecl. 
2), or a master singer's claim to have composed several eclogues (Ecl. 5), modern scholars largely reject such efforts to garner biographical details from fiction, preferring to interpret an author's characters and themes as illustrations of contemporary life and thought. The ten Eclogues present traditional pastoral themes with a fresh perspective. Eclogues 1 and 9 address the land confiscations and their effects on the Italian countryside. Eclogues 2 and 3 are pastoral and erotic, discussing homosexual love (Ecl. 2) and attraction toward people of any gender (Ecl. 3). Eclogue 4, addressed to Asinius Pollio, the so-called "Messianic Eclogue", uses the imagery of the golden age in connection with the birth of a child (the child's identity has been debated). Eclogues 5 and 8 describe the myth of Daphnis in a song contest; 6, the cosmic and mythological song of Silenus; 7, a heated poetic contest; and 10, the sufferings of the contemporary elegiac poet Cornelius Gallus. Virgil in his Eclogues is credited with establishing Arcadia as a poetic ideal that still resonates in literature and visual arts and with setting the stage for the development of Latin pastoral by Calpurnius Siculus, Nemesianus and later writers. The ostensible theme of the Georgics is instruction in the methods of running a farm. In handling this, Virgil follows in the didactic ("how to") tradition of the Greek poet Hesiod's Works and Days and works of the later Hellenistic poets. The four books of the Georgics focus respectively on field crops and weather, trees and vines, livestock, and bees. Well-known passages include the beloved Laus Italiae of Book 2, the prologue description of the temple in Book 3, and the description of the plague at the end of Book 3. Book 4 concludes with a long mythological narrative, in the form of an epyllion, which describes vividly the discovery of beekeeping by Aristaeus, and the story of Orpheus' journey to the underworld. Ancient scholars, such as Servius, conjectured that the Aristaeus episode replaced, at the emperor's request, a long section in praise of Virgil's friend, the poet Gallus, who was disgraced by Augustus, and committed suicide in 26 BC. The tone of the Georgics wavers between optimism and pessimism, sparking critical debate on the poet's intentions,: 1605 but the work lays the foundations for later didactic poetry. Virgil and Maecenas are said to have taken turns reading the Georgics to Octavian upon his return from defeating Antony and Cleopatra at the Battle of Actium in 31 BC. The Aeneid is widely considered Virgil's finest work, and one of the most important poems in the history of literature (T. S. Eliot referred to it as "the classic of all Europe"). The work, modelled after Homer's Iliad and Odyssey, chronicles the journey of a warrior and refugee of the Trojan War, named Aeneas, as he struggles to fulfill his destiny. After fleeing the sack of Troy, he travels to Italy, where he battles with Turnus, and his descendants Romulus and Remus found the city of Rome. The epic poem consists of 12 books in dactylic hexameter verse. The Aeneid's first six books describe the journey of Aeneas from Troy to Rome. Virgil made use of several models in the composition of his epic;: 1603 Homer, the pre-eminent author of classical epic, is everywhere present, but Virgil also makes special use of the Latin poet Ennius and the Hellenistic poet Apollonius of Rhodes, among other writers to whom he alludes.
Although the Aeneid casts itself firmly into the epic mode, it often expands the genre by including elements of other genres, such as tragedy and aetiological poetry. Ancient commentators noted that Virgil seems to divide the Aeneid into two sections based on the poetry of Homer; the first six books were viewed as employing the Odyssey as a model while the last six were connected to the Iliad. Book 1,[ii] at the head of the Odyssean section, opens with a storm which Juno, Aeneas's enemy throughout the poem, stirs up against the fleet. The storm drives the hero to the coast of Carthage, which was Rome's deadliest foe. The queen, Dido, welcomes the ancestor of the Romans, and under the influence of the gods falls deeply in love with him. At a banquet in Book 2, Aeneas tells the story of the sack of Troy, the death of his wife, and his escape, to the enthralled Carthaginians, while in Book 3 he recounts to them his wanderings over the Mediterranean in search of a suitable new home. Jupiter in Book 4 recalls the lingering Aeneas to his duty to found a new city, and he slips away from Carthage, leaving Dido to commit suicide, cursing Aeneas and calling down revenge in symbolic anticipation of the fierce wars between Carthage and Rome. In Book 5, funeral games are celebrated for Aeneas's father Anchises, who had died a year before. On reaching Cumae, in Italy in Book 6, Aeneas consults the Cumaean Sibyl, who conducts him through the Underworld where Aeneas meets the dead Anchises who reveals Rome's destiny to his son. Book 7, beginning the Iliadic half, opens with an address to the muse and recounts Aeneas's arrival in Italy and betrothal to Lavinia, daughter of King Latinus. Lavinia had already been promised to Turnus, the king of the Rutulians, who is roused to war by the Fury Allecto and Amata, Lavinia's mother. In Book 8, Aeneas allies with King Evander, who occupies the future site of Rome, and is given new armor and a shield depicting Roman history. Book 9 records an assault by Nisus and Euryalus on the Rutulians; Book 10, the death of Evander's young son Pallas; and 11 the death of the Volscian warrior princess Camilla and the decision to settle the war with a duel between Aeneas and Turnus. The Aeneid ends in Book 12 with the taking of Latinus's city, the death of Amata, and Aeneas's defeat and killing of Turnus, whose pleas for mercy are spurned. The final book ends with the image of Turnus's soul lamenting as it flees to the underworld. Critics of the Aeneid focus on a variety of issues.[iii] The tone as a whole is a particular matter of debate; some see the poem as ultimately pessimistic and politically subversive to the Augustan regime, while others view it as a celebration of the new imperial dynasty. Virgil makes use of the symbolism of the regime, and some scholars see strong associations between Augustus and Aeneas, the one as founder and the other as re-founder of Rome. A strong teleology, or drive towards a climax, has been detected. The Aeneid is full of prophecies about the future of Rome, the deeds of Augustus, his ancestors, and famous Romans, and the Carthaginian Wars; the shield of Aeneas even depicts Augustus's victory at Actium against Mark Antony and Cleopatra in 31 BC. A further focus of study is the character of Aeneas. 
As the protagonist, Aeneas seems to waver constantly between his emotions and commitment to his prophetic duty to found Rome; critics note the breakdown of Aeneas's emotional control in the last sections of the poem where the "pious" and "righteous" Aeneas mercilessly slaughters Turnus. The Aeneid appears to have been a great success. Virgil is said to have recited Books 2, 4, and 6 to Augustus;: 1603 and Book 6 apparently caused the emperor's sister Octavia to faint. Although the truth of this claim is subject to scholarly skepticism, it has served as a basis for art, such as Jean-Baptiste Wicar's Virgil Reading the Aeneid. Some lines of the poem were left unfinished, and the whole was unedited, at Virgil's death in 19 BC. As a result, the text of the Aeneid that exists may contain faults which Virgil was planning to correct before publication. However, the only obvious imperfections are a few lines of verse that are metrically unfinished, i.e. not a complete line of dactylic hexameter. Some scholars have argued that Virgil deliberately left these incomplete for dramatic effect. Other alleged imperfections are subject to debate. Legacy and reception The works of Virgil, almost from the moment of their publication, revolutionized Latin poetry. The Eclogues, Georgics, and above all the Aeneid became standard texts in school curricula with which all educated Romans were familiar. Poets following Virgil often refer intertextually to his works to generate meaning in their poetry. The Augustan poet Ovid parodies the opening lines of the Aeneid in Amores 1.1.1–2, and his summary of the Aeneas story in Book 14 of the Metamorphoses, the so-called "mini-Aeneid", has been viewed as an important example of post-Virgilian response to the epic genre. Lucan's epic, the Bellum Civile, has been considered an anti-Virgilian epic, disposing of the divine mechanism, treating historical events, and diverging from Virgilian epic practice. The Flavian-era poet Statius in his 12-book epic Thebaid engages closely with the poetry of Virgil; in his epilogue he advises his poem not to "rival the divine Aeneid, but follow afar and ever venerate its footsteps". Virgil finds one of his most ardent admirers in Silius Italicus. With almost every line of his epic Punica, Silius references Virgil. Virgil also found commentators in antiquity. Servius, a commentator of the 4th century AD, based his work on the commentary of Donatus. Servius's commentary provides us with a great deal of information about Virgil's life, sources, and references; however, many modern scholars find the variable quality of his work and the often simplistic interpretations frustrating. Even as the Western Roman Empire collapsed, literate men acknowledged that Virgil was a master poet; Augustine of Hippo confessed how he had wept at reading the death of Dido. The best-known surviving manuscripts of Virgil's works include manuscripts from late antiquity, such as the Vergilius Augusteus, the Vergilius Vaticanus and the Vergilius Romanus. Gregory of Tours read Virgil, whom he quotes in several places, along with other Latin poets, though he cautions that "we ought not to relate their lying fables, lest we fall under sentence of eternal death". In the Renaissance of the 12th century, Alexander Neckham placed the "divine" Aeneid on his standard arts curriculum, and Dido became the romantic heroine of the age. Monks like Maiolus of Cluny might repudiate what they called "the luxurious eloquence of Virgil", but they could not deny the power of his appeal. 
Dante presents Virgil as his guide through Hell and the greater part of Purgatory in the Divine Comedy. He also mentions Virgil in De vulgari eloquentia, as one of the four regulati poetae along with Ovid, Lucan and Statius (ii, vi, 7). The Renaissance saw several authors inspired to write epic in Virgil's wake: Edmund Spenser called himself the English Virgil; Paradise Lost was influenced by the Aeneid; and later artists influenced by Virgil include Berlioz and Hermann Broch. From the early modern period until the middle of the 18th century, Virgil was often regarded as the preeminent poet that European poets should try to emulate. A shift began in Germany when classical Greek culture rose in prestige at the expense of Roman, notably through the influence of Johann Joachim Winckelmann. In spite of a loss in prestige, Virgil continued to be widely read and studied, and also had significant influence on German-language writers from the second half of the 18th century, such as Salomon Gessner, Maler Müller, Johann Heinrich Voß, Johann Wolfgang von Goethe and Novalis. The legend of "Virgil in his basket" arose in the Middle Ages, and is often seen in art and mentioned in literature as part of the Power of Women literary topos, demonstrating the disruptive force of female attractiveness on men. In this story Virgil became enamoured of a beautiful woman, sometimes described as the emperor's daughter or mistress and called Lucretia. She played him along and agreed to an assignation at her house, which he was to sneak into at night, by climbing into a large basket let down from a window. When he did so he was hoisted only halfway up the wall and left trapped there into the next day, exposed to public ridicule. The story paralleled that of Phyllis riding Aristotle. Among other artists depicting the scene, Lucas van Leyden made a woodcut and later an engraving. Partially as a result of his so-called "Messianic" Eclogue 4 – interpreted from the 3rd century by Christian thinkers to have predicted the birth of Jesus – Virgil was in later antiquity imputed to have the magical abilities of a seer. Eclogue 4 describes the birth of a boy ushering in a golden age. In consequence, Virgil came to be seen on a similar level to the Hebrew prophets of the Bible as one who had heralded Christianity. The Jewish Encyclopedia argues that medieval legends about the golem may have been inspired by Virgilian legends about the poet's apocryphal power to bring inanimate objects to life. Possibly as early as the 2nd century AD, and into the Middle Ages, Virgil's works were seen as having magical properties and used for divination. In what became known as the Sortes Vergilianae ("Virgilian Lots"), passages would be selected at random and interpreted to answer questions. In a similar vein, Macrobius in the Saturnalia credits the work of Virgil as the embodiment of human knowledge and experience, mirroring the Greek conception of Homer.: 1603 In the 12th century, starting around Naples but eventually spreading throughout Europe, a tradition developed in which Virgil was regarded as a great magician. Legends about Virgil and his magical powers remained popular for over two hundred years, arguably becoming as prominent as his writings. In medieval Wales, the Welsh version of his name, Fferyllt or Pheryllt, became a generic term for magic-worker, and survives in its word for pharmacist, fferyllydd.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Jewish_population_by_country] | [TOKENS: 1616]
Jewish population by country

As of 2025, the world's core Jewish population (those identifying as Jews to the exclusion of all else) was estimated at 15.8 million, which is approximately 0.2% of the 8 billion worldwide population. However, the "core Jewish" criterion faces criticism, especially in debates over the American Jewish population count, since it excludes the growing number of people who carry multiple ethnic and religious identities and may self-identify as Jews or qualify as Jewish under the Halakhic principle of matrilineal descent. Countries with core Jewish populations above 100,000 include France (440,000), Canada (398,000), the United Kingdom (312,000), Argentina (171,000), Russia (132,000), Germany (125,000), and Australia (117,200). In 1939, the core Jewish population reached its historical peak of 16.6 million or more. Due to the murder of almost six million Jews during the Holocaust, this number was reduced to 11 million by 1945. The core Jewish population grew to around 13 million by the 1970s and then recorded almost no growth until around 2005, due to low fertility rates and interfaith marriage by Jews. From 2005 to 2018, the world's core Jewish population grew 0.63% annually on average, while the world's population overall grew 1.1% annually in the same period. This increase primarily reflects the rapid growth of Haredi and Orthodox populations.

Trends

Recent Jewish population dynamics are characterized by a continued steady increase in the Israeli population and flat or declining numbers in countries outside the Holy Land (the diaspora). Aliyah to Palestine began in earnest following the 1839 Tanzimat reforms; between 1840 and 1880, the Jewish population in Palestine rose from 9,000 to 23,000. In the late 19th century, 99.7% of the world's Jews lived outside the region, with Jews representing 2–5% of the population of Palestine. Through the phases of Aliyah, the Jewish population rose to 630,000 by the establishment of Israel in 1948. By 2014 this had risen to 6,135,000, while the population of the diaspora had dropped from 10.5 to 8.1 million over the same period. Current demographics of Israel are characterized by a relatively high fertility rate of 3 children per woman and a stable age distribution. The overall growth rate of Jews in Israel is 1.7% annually. The diaspora countries, by contrast, have low Jewish birth rates, an increasingly elderly age composition, and a negative balance of people leaving Judaism versus converting to Judaism. Immigration trends also favor Israel ahead of diaspora countries. The Jewish state has a positive immigration balance (called aliyah in Hebrew). Israel saw its Jewish numbers significantly buoyed by a million-strong wave of Aliyah from the former Soviet Union in the 1990s, and immigration growth has been steady (in the low tens of thousands) since then. In general, the modern English-speaking world has seen an increase in its share of the diaspora since the Holocaust and the foundation of Israel, while historic diaspora Jewish populations in Eastern Europe, North Africa, and the Middle East have significantly declined or disappeared. France continues to be home to the world's third largest Jewish community, at around 500,000, but has shown an increasingly negative trend. As a long-term trend, intermarriage has reduced its "core" Jewish population and increased its "connected" and "enlarged" Jewish populations.
More recently, migration loss to Israel amongst French Jews reached the tens of thousands between 2014 and 2017, following a wave of antisemitic attacks. According to a 2017 Pew Research Center survey, over the next four decades the number of Jews around the world is expected to increase from 14.2 million in 2015 to 16.4 million in 2060. The number of Jews in the United States has been much debated because of differences in counting methodology, resulting in recurring discrepancies of a million or more people in reports. These methodology differences are detailed by Pew Research Center in a 2020 study which estimated there were 5.8 million adult Jews in the United States and 1.8 million children of at least one Jewish parent being raised as Jewish in some way, for a total of 7.5 million Jews, 2.5% of the national population. However, Pew noted that Hebrew University demographer Sergio Della Pergola, reviewing the same data and applying a narrower definition, which counts children and adult Jews without religious affiliation only if they have two Jewish parents, determined that there were 4.8 million Jewish adults and 1.2 million Jewish children in the U.S. for a total of 6 million Jews, 2% of the national population. These numbers can be further complicated by applying the matrilineal descent principle from Halakha: Pew noted that while only 4.8 million of the 5.8 million adults classified as Jews reported having a Jewish mother, a further 1.3 million adults classified as non-Jews of Jewish background reported that they did have a Jewish mother.

By country

Below is a list of Jewish populations in the world by country. All data below, except for the National official population, are from the annual World Jewish Population (2020) report coordinated by demographer Sergio Della Pergola at the Hebrew University of Jerusalem as part of the American Jewish Year Book. The figures are primarily based on national censuses combined with trend analysis. Della Pergola reports figures under four different definitions of the Jewish population. Where available, the list additionally contains official statistics reported by individual nations, with the year of the latest report, as the National official population. The table includes countries where Jews number at least a few dozen. Reports exist of Jewish communities remaining in other territories that number in the low single digits and are on the verge of disappearing, particularly in the Islamic world, where persecution following the Israeli Declaration of Independence drove out most Jews; these are often of historical interest as they represent the remnant of much larger Jewish populations. For example, Egypt had a Jewish community of 80,000 in the early 20th century that numbered fewer than 40 as of 2014, mainly because of expulsions and forced emigration to Israel and other countries. Despite a 2,000-year history of Jewish presence, there are no longer any known Jews living in Afghanistan, as its last Jewish residents, Zebulon Simintov and Tova Moradi, fled the country in September and October 2021, respectively. In the Syrian Arab Republic, another Jewish community saw a mass exodus at the end of the 20th century and numbered fewer than 20 in the midst of the Syrian Civil War. The size of the Jewish community in Indonesia has been variously given as 65, 100, or 18 at most over the last 50 years.
During the Yemeni civil war (2014–present), Yemeni Jews have faced persecution by various radical Islamist and jihadist organizations, including the Houthis, AQAP, and ISIS-Yemen, who have demanded that they convert to Islam, pay the jizya tax, or face execution. The Israel Defense Forces has conducted operations evacuating the population and moving them to Israel. On 28 March 2021, 13 Jews were forced by the Houthis to leave Yemen, leaving the last four elderly Jews in Yemen. According to one report there are six Jews left in Yemen: one woman, her brother, three others, and Levi Salem Marahbi (who had been imprisoned for helping smuggle a Torah scroll out of Yemen).
========================================
[SOURCE: https://github.com/pricing] | [TOKENS: 5624]
Try the Copilot-powered platform. We get it, there's a lot you can do with GitHub. That's why we've packed all of it into a single risk-free trial that includes GitHub Enterprise, Copilot, and Advanced Security.

Free: The basics for individuals and organizations
- Host open source projects in public GitHub repositories, accessible via web or command line. Public repositories are accessible to anyone at GitHub.com.
- Keep projects secure by automatically opening pull requests to update vulnerable dependencies and keep them up to date.
- Free for public repositories: Use execution minutes with GitHub Actions to automate your software development workflows. Write tasks and combine them to build, test, and deploy any code project on GitHub.
- Free for public repositories: Host your own software packages or use them as dependencies in other projects. Both private and public hosting available.
- Give your developers flexible features for project management that adapt to any team, project, and workflow — all alongside your code.
- Get help with most of your GitHub questions and issues in our Community Forum.
- With GitHub Copilot, get suggestions for whole lines or entire functions—right inside your editor.
- With GitHub Codespaces, get an instant dev environment in the cloud, so you can code anywhere on any device.

Team (Most popular): Advanced collaboration for individuals and organizations
- Blazing fast cloud developer environments with flexible compute and pre-configured containers; developers can code, collaborate, and debug from any browser. Pay only for what you use with compute fees starting at $0.18/hr and storage fees at $0.07/GB per month.
- Enforce restrictions on how code branches and tags are merged across your organization, including requiring reviews by selected collaborators, or allowing only specific contributors to work on a particular branch.
- Assign multiple users or a team to review a pull request.
- Easily discuss and collaborate on pull requests before submitting to formal review.
- Automatically request reviews—or require approval—by selected contributors when changes are made to sections of code that they own.
- Ensure that pull requests have a specific number of approving reviews before collaborators can make changes to a protected branch.
- Host documentation and simple websites for your project in a wiki format that contributors can easily edit either on the web or command line.
- A job cannot access secrets that are defined in an environment unless it is running on the specified branch.
- Free for public repositories: Use execution minutes with GitHub Actions to automate your software development workflows. Write tasks and combine them to build, test, and deploy any code project on GitHub.
- Free for public repositories: Host your own software packages or use them as dependencies in other projects. Both private and public hosting available.
- GitHub Support can help you troubleshoot issues you run into while using GitHub.
- Ensure your secrets stay secure. Mitigate risk associated with exposed secrets in your repositories, while preventing new leaks before they happen with push protection.
- Find and fix vulnerabilities in your code before they reach production. Prioritize your Dependabot alerts with automated triage rules.

Enterprise (Recommended): Security, compliance, and flexible deployment
- GitHub Enterprise Cloud offers a multi-tenant enterprise SaaS solution on Microsoft Azure, allowing you to choose a regional cloud deployment for data residency, so your in-scope data is stored at rest in a designated location. Start a free 30 day trial today or contact our sales team for more information.
- Own and control the user accounts of your enterprise members through your identity provider (IdP).
- Automatically invite members to join your organization when you grant access on your IdP. If you remove a member's access to your GitHub organization on your SAML IdP, the member will be automatically removed from the GitHub organization.
- GitHub Enterprise Cloud includes the option to create an enterprise account, which enables collaboration between multiple organizations, gives administrators a single point of visibility and management, and brings license cost savings for identical users in multiple organizations.
- When a workflow job references an environment, the job won't start until all of the environment's protection rules pass.
- Enforce branch and tag protections, as well as push rules, across your enterprise. Rule insights allow you to assess the impact of rules before and during enforcement.
- As a GitHub Enterprise Cloud organization administrator, you can now access log events using our GraphQL API and monitor the activity in your organization (see the query sketch after this list).
- GitHub offers AICPA System and Organization Controls (SOC) 1 Type 2 and SOC 2 Type 2 reports with IAASB International Standards on Assurance Engagements, ISAE 3000, and ISAE 3402.
- Government users can host projects on GitHub Enterprise Cloud with the confidence that our platform meets the low impact software-as-a-service (SaaS) baseline of security standards set by our U.S. federal government partners.
- Use an identity provider to manage the identities of GitHub users and applications.
- Quickly review the actions performed by members of your organization. Keep copies of audit log data to ensure secure IP and maintain compliance for your organization.
- Share features and workflows between your GitHub Enterprise Server instance and GitHub Enterprise Cloud.
- Free for public repositories: Use execution minutes with GitHub Actions to automate your software development workflows. Write tasks and combine them to build, test, and deploy any code project on GitHub.
- Free for public repositories: Host your own software packages or use them as dependencies in other projects. Both private and public hosting available.
- With Premium, get a 30-minute SLA on Urgent tickets and 24/7 web and phone support via callback request. With Premium Plus, get everything in Premium, an assigned Customer Reliability Engineer, and more. Learn more about Premium Support.
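To illustrate the audit-log item above, here is a minimal sketch of pulling recent organization audit log events over the GraphQL API. It is not taken from this page: the organization name and the GITHUB_TOKEN environment variable are hypothetical, and the auditLog connection and AuditEntry fields are assumptions based on the public GraphQL schema, so verify them against your own schema version before relying on this.

```python
# Hedged sketch: list recent organization audit log events via the GitHub GraphQL API.
# Assumes a token with audit-log read access in GITHUB_TOKEN and a hypothetical org name.
import os
import requests

QUERY = """
query($org: String!) {
  organization(login: $org) {
    auditLog(first: 20) {
      nodes {
        ... on AuditEntry {
          action
          actorLogin
          createdAt
        }
      }
    }
  }
}
"""

def fetch_audit_log(org: str) -> list[dict]:
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY, "variables": {"org": org}},
        headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    nodes = resp.json()["data"]["organization"]["auditLog"]["nodes"]
    return [entry for entry in nodes if entry]  # skip entries without AuditEntry fields

if __name__ == "__main__":
    for entry in fetch_audit_log("example-org"):  # hypothetical organization name
        print(entry["createdAt"], entry["actorLogin"], entry["action"])
```

Pagination and the audit-log query filter argument are omitted to keep the sketch short.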
Additional add-ons. Get started for free with up to 2,000 completions and 50 chat requests per month. Bring industry-leading AI into your workflow, securely, scalably, and with full developer control. Gain peace of mind with our security, privacy, and responsible AI policies. Starting at $0.18 per hour of compute and $0.07 per GB of storage. Get expert help for Enterprise Cloud and Enterprise—any hour your team needs it. $5 per month for 50 GB bandwidth and 50 GB of storage. “GitHub is the world’s mono repository, so sharing our open source there is natural.” — Martin Andersen, VP of Engineering, Trustpilot “GitHub Advanced Security is there for every pull request and excels compared to other static analysis tools we have used.” — Dimosthenis Kaponis, CTO, Netdata “GitHub keeps us up to speed with the industry’s best tools. We want new hires to know GitHub is in our toolchain—it makes them excited to join us.” — Spencer Kaiser, Principal Architect of Emerging Tech, American Airlines “This collaborative way of building software is unstoppable. It isn’t going away—and GitHub has its place in that. We can make the whole company rethink how they build software.” — Ingo Sauerzapf, SAP Cloud Development Tools Manager “People know what a pull request is because it’s how they contribute to open source projects. We have many developers who are well-versed with GitHub, either for personal development or previous roles. With GitHub Enterprise, no one has to relearn the wheel.” — Laurent Ploix, Product Manager, Spotify “I have seen some truly revolutionary actions happen in communities on GitHub. People are collaborating on code but they’re also having foundational conversations on best practices and how software, as a whole, is built. More and more, GitHub is an internet archive. It’s a deeply social and critical piece of our infrastructure.” — Michael Glukhovsky, Developer, Stripe “When we started talking about code reuse, we felt like we already had the perfect platform in place: GitHub.” — Timothy Carmean, Software Processes and Tools Supervisor, Ford “Using GitHub Enterprise Cloud removes the burden of managing infrastructure, and we don’t need to worry about the availability of our versioning code, source code and versioning tools. It lets us focus on what’s important for our business, and that’s our customers.” — Victor Gomes, Infosec Tech Manager, Nubank Compare features: Free, Team, Enterprise. Host open source projects in public GitHub repositories, accessible via web or command line. Public repositories are accessible to anyone at GitHub.com. Host code in private GitHub repositories, accessible via appliance, web, and command line. Private repositories are only accessible to you and people you share them with. Spin up fully configured dev environments in the cloud with the power of your favorite editor. A "core hour" denotes compute usage. On a 2-core machine, you would get 60 hours free. On a 4-core machine, you would get 30 hours free, etc. Free hours are assigned to personal accounts, rather than free organizations.
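The core-hour examples above imply a fixed monthly budget: 2 cores × 60 hours and 4 cores × 30 hours both come to 120 core-hours. A minimal sketch of that arithmetic, assuming the 120 core-hour figure (inferred from the examples, not an official quota; check GitHub's current billing documentation):

```python
# Core-hour arithmetic implied by the Codespaces examples above. The 120
# core-hour/month budget is inferred from "2-core x 60 h" and "4-core x 30 h";
# it is an assumption for illustration, not an official figure.
CORE_HOUR_BUDGET = 2 * 60  # 120 core-hours per month

def free_machine_hours(cores: int, budget: float = CORE_HOUR_BUDGET) -> float:
    """Wall-clock hours a machine with `cores` cores can run before the budget is used up."""
    return budget / cores

for cores in (2, 4, 8, 16):
    print(f"{cores}-core machine: {free_machine_hours(cores):.1f} free hours/month")
# 2-core: 60.0, 4-core: 30.0, 8-core: 15.0, 16-core: 7.5
```

The same division gives the free wall-clock hours for any machine size, which is why doubling the core count halves the free hours in the examples above.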
Free for public repositories: use execution minutes with GitHub Actions to automate your software development workflows. Write tasks and combine them to build, test, and deploy any code project on GitHub. Minutes are free for public repositories. Learn more about billing. Free for public repositories: host your own software packages or use them as dependencies in other projects. Both private and public hosting available. Packages are free for public repositories. Review new code, see visual code changes, and confidently merge code changes with automated status checks. Allow contributors to easily notify you of changes they've pushed to a repository – with access limited to the contributors you specify. Easily merge changes you accept. Enforce restrictions on how code branches are merged, including requiring reviews by selected collaborators, or allowing only specific contributors to work on a particular branch. Automatically request reviews – or require approval – by selected contributors when changes are made to sections of code that they own. Easily discuss and collaborate on pull requests before submitting to formal review. Assign more than one person to a pull request. See data about activity and contributions within your repositories, including trends. You can use this data to improve collaboration and make development faster and more effective. Send scheduled messages to you or your team listing open pull requests. Automatically assign code reviews to members of your team based on one of two algorithms. When a workflow job references an environment, the job won't start until all of the environment's protection rules pass. A job cannot access secrets that are defined in an environment unless it is running on the specified branch.
Invite any GitHub member, or all GitHub members, to work with you on code in a public repository you control – including making changes and opening issues. Invite any GitHub member, or all GitHub members, to work with you on code in a private repository you control – including making changes and opening issues. Track bugs, enhancements, and other requests, prioritize work, and communicate with stakeholders as changes are proposed and merged. Visualize and manage issues and pull requests across tables, boards, and roadmaps with custom fields and views that you can arrange to suit your workflow. Track progress on groups of issues or pull requests in a repository, and map groups to overall project goals. Manage access to projects on a team-by-team, or individual user, basis. Host documentation and simple websites for your project in a wiki format that contributors can easily edit either on the web or command line. Assign more than one person to an issue. Prevent secret exposures by proactively blocking secrets before they reach your code. Detect and manage exposed secrets across git history, pull requests, issues, and wikis. GitHub collaborates with AWS, Azure, and Google Cloud to detect secrets with high accuracy. This minimizes false positives, letting you focus on what matters. Providers get real-time alerts when their tokens appear in public code, enabling them to notify, quarantine, or revoke secrets. Prioritize active secrets with validity checks for provider patterns. Use AI to detect unstructured secrets like passwords—without the noise. Detect tokens from unknown providers, including HTTP authentication headers, connection strings, and private keys.
Create your own patterns and find organization-specific secrets. Manage who can bypass push protection and when. Understand how risk is distributed across your organization with security metrics and insight dashboards. Review how and when GitHub scans your repositories for secrets. Powered by GitHub Copilot, generate automatic fixes for 90% of alert types in JavaScript, TypeScript, Java, and Python. Centralize your findings across all your scanning tools via SARIF upload to GitHub. Quickly remediate with context provided by Copilot Autofix. Uncover vulnerabilities in your code with our industry-leading semantic code analysis. Reduce security debt and burn down your security backlog with security campaigns. Get a clear view of your project’s dependencies with a summary of manifest, lock files, and submitted dependencies via the API. Catch insecure dependencies before adding them and get insights on licenses, dependents, and age. Define alert-centric policies to control how Dependabot handles alerts and pull requests. Automated pull requests that batch dependency updates for known vulnerabilities. Automated pull requests that keep your dependencies up to date. Get a clear view of risk distribution with security metrics and dashboards. Enforce consistent code standards, security, and compliance across branches and tags. Export a software bill of materials (SBOM) for your repository. Ensure unfalsifiable provenance and integrity for your software.
Define users' level of access to your code, data and settings. Use an extra layer of security with two-factor authentication (2FA) when logging into GitHub. Quickly review the actions performed by members of your organization. Keep copies of audit log data to ensure secure IP and maintain compliance for your organization. Share features and workflows between your GitHub Enterprise Server instance and GitHub Enterprise Cloud. Use an identity provider to manage the identities of GitHub users and applications. Access GitHub Enterprise Server using your existing accounts and centrally manage repository access. Limit access to known allowed IP addresses. Install apps that integrate directly with GitHub's API to improve development workflows – or build your own for private use or publication in the GitHub Marketplace. Define tests that GitHub automatically runs against code being committed to your repository, and get details about failures and what is causing them. Create requirements for automatically accepting or rejecting a push based on the contents of the push. Get help with most of your GitHub questions and issues in our Community Forum. GitHub Support can help you troubleshoot issues you run into while using GitHub. Get support via the web. With Premium, get a 30-minute SLA on Urgent tickets and 24/7 web and phone support via callback request. With Premium Plus, get everything in Premium, plus an assigned Customer Reliability Engineer, and more. Learn more about Premium Support. Pay bills via invoice, rather than using your credit card.
Self-hosted GitHub for on-prem appliances or self-managed cloud tenants. Multi-tenant enterprise SaaS solution on Microsoft Azure, allowing you to choose a regional cloud deployment for data residency, so your in-scope data is stored at rest in a designated location. This is available in the EU and Australia, with additional regions coming soon. Contact our sales team to learn more. We love people who are changing the world. If you manage multiple contributors, there’s a free option. We also run GitHub Sponsors, where we help fund your work. We’ve partnered with industry leaders to give students and teachers free access to the best developer tools—for the school year and beyond. Work for a government-recognized nonprofit, association, or 501(c)(3)? Get a discounted Organization on us.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-43] | [TOKENS: 11899]
Mars
Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days); a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 billion to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; and Phobos would be a remnant of that ring. Epochs: the geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
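As an aside on the bulk figures quoted earlier in this passage: the roughly 38% surface gravity follows directly from the mass and radius ratios, since g = GM/R². A quick check, using standard reference values for the two planets' masses and mean radii (these numbers are assumptions for illustration, not taken from the text):

```python
# Check that ~11% of Earth's mass and ~53% of its radius give ~38% of Earth's
# surface gravity. Since g = G*M/R^2, the ratio is (M_mars/M_earth)/(R_mars/R_earth)^2.
# Masses and radii below are standard reference values, not figures from the article.
M_EARTH, R_EARTH = 5.972e24, 6.371e6   # kg, m
M_MARS,  R_MARS  = 6.417e23, 3.3895e6  # kg, m

mass_ratio    = M_MARS / M_EARTH              # ~0.107
radius_ratio  = R_MARS / R_EARTH              # ~0.532
gravity_ratio = mass_ratio / radius_ratio**2  # ~0.38

print(f"mass ratio    {mass_ratio:.3f}")
print(f"radius ratio  {radius_ratio:.3f}")
print(f"gravity ratio {gravity_ratio:.2f}")   # ~0.38, i.e. ~38% of Earth's surface gravity
```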
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogenous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day or 22 millirads per day experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-tectonic plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the Martian atmosphere's higher concentration of CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity, and it approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. Seasonal deposits of dry ice also cover the polar ice caps. Hydrology Mars contains water in significant amounts, but most of it is dust-covered water ice at the Martian polar ice caps.
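Returning briefly to the scale-height comparison quoted above: those values follow from the relation H = RT/(Mg). A rough check, assuming a representative mean temperature of about 210 K for both atmospheres and standard molar masses (illustrative assumptions, not figures from the article):

```python
# Rough check of the atmospheric scale heights quoted above: H = R*T / (M*g).
# The temperature (210 K) and molar masses are representative assumed values.
R = 8.314  # universal gas constant, J/(mol*K)

def scale_height(T, molar_mass, g):
    """Scale height in metres for temperature T (K), molar mass (kg/mol), surface gravity g (m/s^2)."""
    return R * T / (molar_mass * g)

h_mars  = scale_height(T=210, molar_mass=0.0440, g=3.71)  # CO2-dominated atmosphere
h_earth = scale_height(T=210, molar_mass=0.0290, g=9.81)  # N2/O2 mix, same T for comparison

print(f"Mars  scale height ~ {h_mars / 1000:.1f} km")   # ~10.7 km (article: ~10.8 km)
print(f"Earth scale height ~ {h_earth / 1000:.1f} km")  # ~6.1 km  (article: ~6 km)
```

The lower Martian gravity in the denominator is what pushes the Martian value well above Earth's, as the passage notes.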
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet for Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars has its closest approach to Earth (around opposition) in a synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Sun, forming a straight line through it. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet.
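Before turning to the moons, a note on the ~780-day synodic period quoted above: it follows from combining the two orbital periods via 1/S = 1/P_Earth - 1/P_Mars. A quick check with standard reference values for the periods (assumed here, not quoted in the passage):

```python
# The ~780-day synodic period of Mars follows from the two sidereal orbital periods:
# 1/S = 1/P_earth - 1/P_mars. The period values below are standard reference figures.
P_EARTH = 365.256  # sidereal orbital period of Earth, days
P_MARS  = 686.98   # sidereal orbital period of Mars, days

synodic = 1 / (1 / P_EARTH - 1 / P_MARS)
print(f"synodic period of Mars ~ {synodic:.1f} days")  # ~779.9 days, matching the article
```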
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
During Sumerian times, Nergal was a minor deity of little significance, but in later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις; more commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a 10-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum.: 433–437 In 1610, the Italian astronomer Galileo Galilei made the first telescopic astronomical observations, which included Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali. The first spacecraft sent from Earth to visit Mars was the Soviet Union's Mars 1, which was to fly by the planet in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous conceptions of Mars were overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Planned missions to Mars include: As of February 2024[update], debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars; if life existed at those sites, the glass could likewise have preserved signs of it. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the find does not allow a definitive determination of a biological or abiotic origin of this rock with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021 China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Palestine_Exploration_Fund] | [TOKENS: 2642]
Contents Palestine Exploration Fund The Palestine Exploration Fund is a British society based in London. It was founded in 1865, shortly after the completion of the Ordnance Survey of Jerusalem by Royal Engineers of the War Department. The Fund is the oldest known organization in the world created specifically for the study of the Levant region, also known as Palestine. Often simply known as the PEF, its initial objective was to carry out surveys of the topography and ethnography of Ottoman Palestine – producing the PEF Survey of Palestine. Its remit was considered to fall between an expeditionary survey and military intelligence gathering. There was also strong religious interest from Christians; William Thomson, Archbishop of York, was the first president of the PEF. As a result, the PEF had a complex relationship with Corps of Royal Engineers of the War Department. The PEF members sent back reports to the UK on the need to salvage and modernise the Levant region. History "This country of Palestine belongs to you and me, it is essentially ours. It was given to the Father of Israel in the words: "Walk through the land in the length of it, and in the breadth of it, for I will give it unto thee". We mean to walk through Palestine in the length and in the breadth of it, because that land has been given unto us. It is the land from which comes news of our Redemption. It is the land towards which we turn as the fountain of all our hopes; it is the land to which we may look with as true a patriotism as we do in this dear old England, which we love so much." Following the completion of the Ordnance Survey of Jerusalem, the Biblical archaeologists and clergymen who supported the survey financed the creation of the fund. It was founded on 22 June 1865 with initial funding of £300. The most notable of the founders were Arthur P. Stanley, the Dean of Westminster, and George Grove, who later founded the Royal College of Music and was responsible for Grove's Dictionary of Music. Its founders established the fund "for the purpose of investigating the Archaeology, Geography, manners, customs and culture, Geology and Natural History of the Holy Land". The roots of the Palestine Exploration Fund lie in a literary society founded by British Consul James Finn and his wife Elizabeth Anne Finn. Many photographs of Palestine have survived from this period. Frederick J. Bliss wrote of the foundation that "[a]s far as its aims were concerned this organization was but a re-institution of a Society formed about the year 1804 under the name of the Palestine Association... it is interesting to note that the General Committee of the Palestine Exploration Fund recognized an organic connection with the earlier Society." The preliminary meeting of the Society of the Palestine Exploration Fund took place in the Jerusalem Chamber of Westminster Abbey. William Thomson, the Archbishop of York, publicly read the original prospectus at this meeting; [O]ur object is strictly an inductive inquiry. We are not to be a religious society; we are not about to launch controversy; we are about to apply the rules of science, which are so well understood by us in our branches, to an investigation into the facts concerning the Holy Land. "No country should be of so much interest to us as that in which the documents of our Faith were written, and the momentous events they describe enacted. At the same time no country more urgently requires illustration ... 
Even to a casual traveller in the Holy Land the Bible becomes, in its form, and therefore to some extent in its substance, a new book. Much would be gained by ...bringing to light the remains of so many races and generations which must lie concealed under the accumulation of rubbish and ruins on which those villages stand ... The PEF conducted many early excavations of biblical and post-biblical sites around the Levant, as well as studies involving natural history, anthropology, history and geography. In 1867, Charles Warren led PEF's biggest expedition. Warren and his team improved the topography of Jerusalem and discovered the ancient water systems that lay beneath the city. The water system was later named Warren's Shaft, after his work. They also made the first excavations of Tell es-Sultan, site of the biblical city of Jericho. A 2013 publication, The Walls of the Temple Mount, provides more specifics about Warren's work, as summarized in a book review: "... he concentrated on excavating shafts down beneath the ground to the level of the lower parts of the external Temple Mount walls, recording the different types of stonework he encountered at different levels and other features, such as Robinson's Arch on the western side and the Herodian street below it. ... in 1884 the PEF published a large portfolio of 50 of Warren's maps, plans and drawings titled Plans, Elevations, Sections, etc., Shewing the Results of the Excavations at Jerusalem, 1867–70 (now known as the 'Warren Atlas')." In 1875, the Earl of Shaftesbury, a prominent social reformer, told the Annual General Meeting of the PEF that "We have there a land teeming with fertility and rich in history, but almost without an inhabitant – a country without a people, and look! scattered over the world, a people without a country." It was one of the earliest usages by a prominent politician of the phrase "A land without a people for a people without a land," which was to become widely used by advocates of Jewish settlement in Palestine. And, he added: "But let it return into the hands of the Israelites..." In 1878, the Treasurer's statement listed over 130 local associations of the PEF in the United Kingdom (including Ireland). There were also branches in Canada and Australia, and Gaza City and Jerusalem. Expenditure in 1877 amounted to £2,959 14s 11d. Notable persons associated with PEF: The first 21 years of the fund are summarised in PEF (1886). Its chapters and persons mentioned include the following: In his opening address (p.8), Archbishop Thomson laid down three basic principles for the Society: Regarding the latter, great emphasis was placed upon the nomenclature "Holy Land", so the notion of religion could never have been far away. Also (p.10) stress was laid upon the fact that "The Society numbers among its supporters Christians and Jews". (Muslims were not mentioned.)[citation needed] Elsewhere the following activities have been reported: The Palestine Exploration Fund was also involved in the foundation of the British School of Archaeology in Jerusalem in 1919. The School worked with the Fund in joint excavations at Jerusalem's Ophel in the 1920s. The school's second director, John Winter Crowfoot, was Chairman of the PEF from 1945 to 1950. Women of the Palestine Exploration Fund Through the late nineteenth and early twentieth centuries, women were frequently employed by the Fund to carry baskets of soil from the excavations to the dump. These women also cut back brush and dug. 
The majority of these women remain nameless, as they were hired to perform hard labour on behalf of the trained archaeologists. Bliss took an active interest in the lives of his workers—though not necessarily in their well-being—recording a few names and stories. In his diary, Bliss wrote that most of the workers were from Bureir, a village six miles away from the Tell. Most of the men slept at camp, "digging little shallow graves for a bed", but "the women and girls had the long walk both before and after work. Six miles' walk before 6.30a.m., and six miles' walk after 5p.m., with a hard day's work of carrying earth-piled baskets on the head in between". He comments that this does not seem like an easy life, but more women and girls applied for work than he could employ. Heuda was one woman employed to work on an excavation with Bliss at Tell el-Hesi. He first writes about her in 1891, noting that she is a capital worker though "a bolder, wilder girl I never saw". He describes with wonder her capacity to run all over the site and clear the trenches for excavation, also commenting on her good looks and marriage prospects. He writes about her cousin, Rizq, as well, and her abilities to haul earth. Bliss provided a unique insight into the lives of two of the women comprising the PEF workforce. Subsequent directors only referred to the women in their employ as anonymous labourers, sometimes complaining that they brought too much gossip—though in Bliss' journals, he recounts more familial and romantic tension that caused trouble on site among the men. PEF today For some years, the fund's office was located north of Wigmore Street in the Marylebone section of the City of Westminster, London, but in early 2019, the PEF moved to 5-6 Dreadnought Walk, Greenwich, London. Chief Executive and Curator of the PEF, Felicity Cobbing, told The Jordan Times that the Ottoman-era Palestine region included historical Palestine, Jordan, southern Syria, Lebanon, the Sinai Peninsula and the island of Cyprus. The PEF's "goal was – and remains – to study the country, its people and its natural, ancient and cultural heritage," she added. The new Greenwich headquarters provides more space for PEF collections and its specialist library. "Now we can welcome many more scholars and we can look forward to developing collaborative projects with other institutions both in the UK and internationally," Cobbing said. The PEF holds regular events and lectures and provides annual grants for various projects. In partnership with the British Museum's Department of the Middle East, the Palestine Exploration Fund hosts free lectures that reflect the diverse interests of its membership. The PEF also co-ordinates joint lectures with the Council for British Research in the Levant, the Anglo-Israel Archaeological Society, the Society for Arabian Studies, and the Egypt Exploration Society. Once a year, an Annual General Meeting (AGM) is held before a lecture. Each year the Palestine Exploration Fund offers grants for travel and research related to topics connected with its founding aims: "to promote research into the archaeology and history, manners and customs and culture, topography, geology and natural sciences of biblical Palestine and the Levant". The committee welcomes interdisciplinary applications relating to the fund's aims, as well as those relating to the PEF's archival collections. The PEF's grants are open to all existing members of the PEF and to those in the process of becoming members.
The PEF's offices also house collections of photographs, maps, specimens, manuscripts, and paintings. At their location in London, there are collections of over 6,000 artefacts that range in date from 40,000 B.C. to the 19th century. The archives contain over 40,000 photographs of Palestine, Jordan, and Syria. Objects come from sites in the southern Levant, in particular from Jerusalem, Tell el Hesi, and Samaria. The material comes almost exclusively from PEF excavations carried out between the 1860s and the 1930s. Items on display include artefacts from excavations by Charles Warren, Sir William Flinders Petrie, Frederick Jones Bliss, and John Crowfoot. The PEF also has a collection of casts from original items that now reside in different areas around the world. Also at the PEF is an archive composed mainly of documents, letters, reports, plans and maps compiled by the explorers and scholars who worked for the PEF. These explorers include Charles Warren in Jerusalem and Palestine (1867–1870), Claude Conder and Horatio Kitchener on the Survey of Western Palestine (1872–1878), the Survey of Eastern Palestine (1880–81) and the Wady Arabah (1883–4), the excavations of Flinders Petrie and Frederick Jones Bliss at Tell el Hesi (1890–1892), the excavations of R.A.S. Macalister at Gezer (1902–06), Duncan Mackenzie's excavations at Ain Shems-Beth Shemesh in 1910–1912, C. L. Woolley and T. E. Lawrence on the Wilderness of Zin Survey (1913–14), and many others. In addition to these items, the PEF also maintains a collection of photographs of expeditions, coins, natural history, models, and historic forgeries. The PEF also houses a library containing books pertaining to the diverse interests of the fund and its members. Quarterly publication The journal of the PEF devoted to the study of the history, archaeology and geography of the Levant has appeared under two successive titles: For more see below under Further reading. See also References Bibliography Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Wikipedia:Recentism] | [TOKENS: 2701]
Contents Wikipedia:Recentism Recentism is a phenomenon on Wikipedia where an article has an inflated or imbalanced focus on recent events. It is writing without an aim toward a long-term, historical view. This can result in, among other things: Recentism is a symptom of Wikipedia's dynamic and immediate editorial process, and has positive aspects as well—up-to-date information on breaking news events, vetted and counter-vetted by enthusiastic volunteer editors, is something that no other encyclopedia can offer. Still, Wikipedia is not a newspaper and it is not an indiscriminate collection of information. Articles should be written from a neutral point of view, with attention to the long-term significance of the information included, and with awareness that, under the general notability guideline, not every topic will merit its own stand-alone article. What to do about it Allegations of recentism should prompt consideration of proportion, balance, and due weight. Material may need to be moved, deleted, or expanded. Certain articles might be merged or placed on the Wikipedia:Articles for deletion list. Conversely, an article might need to be split into multiple articles in order to achieve a balance not readily attainable within a single article. Sometimes in-depth information on current events is more appropriately added to Wikinews. Over-use of recent material does not by itself mean that an article should be deleted, but the quick and contemporaneous passage of events may make any subject difficult to judge as actually notable enough for a permanent encyclopedia entry. Proper perspective requires maturity, judgment, and the passage of time (see also § Suggestions for dealing with recentism, below). Examples A news spike is a sudden surge of mass interest in a current event, whereupon Wikipedians create and update articles on it, even if some readers later feel that the topic was not historically significant in any way. The result might be a well-written and well-documented neutral-point-of-view article on a topic that might hardly be remembered a month later (see Jennifer Wilbanks and the article's deletion debate). Still, these articles are valuable for future historical research. An event that occurs in a certain geographic region might come to dominate an entire article about that region. For example, in the aftermath of Hurricane Katrina, the New Orleans article was inundated with day-by-day facts about the hurricane. The solution: an article on the Effect of Hurricane Katrina on New Orleans was created to collect this quickly accumulating content. Subjects with a long history might be described in purely modern terms, even though they were actually more significant in the past than they are today. Even when the topics remain significant, articles can cover the subject as if the most recent events were the salient, defining traits. For large-scale topics, such as slavery, marriage, or war, the stress might be on simply the last few centuries, though the subject matter of the article might have a history of thousands of years. This tendency towards article imbalance is enhanced by the availability of reliable sources, which is not uniform across different topics. This imbalance arises both from the language a source is written in and from the ease with which it can be accessed.
Sources published in a medium that is both widely available and familiar to editors, such as a news website, are more likely to be used than those from esoteric or foreign-language publications regardless of their reliability. For example, a 2010 story on CNN or BBC News website is more likely to be cited than a 1970 edition of the Bombay Samachar or Večernje novosti. Similarly, the cost of access to a source can be a barrier; for example, most research in astronomy is freely available to the public via arXiv or NASA ADS, while many law journals are available only through costly subscription services. Thus, a political candidate's biography might become bloated with specific details related to a particular, recent election. Long passages in an athlete's or an actor's biography might be devoted to detailed coverage of a recent controversy. With celebrities, an article about a rock music singer or actor who became famous decades ago for achievements on stage may focus almost exclusively on recent news reports of alleged scandals, infidelity, or recreational drug use—none of which are the notability justification behind the creation of their article in the first place. For example, Wikipedia's article on English disc jockey and television presenter Jimmy Savile changed rapidly and substantially during October 2012, with over 700 edits to the article in that month alone compared to 85 for the rest of the year to that point. Eventually, a breakout article, Jimmy Savile sexual abuse scandal was required. Debate over recentism Any disagreement over whether to remove an article might also be related to Wikipedia's ongoing inclusionism-versus-deletionism debate. (Deletionists tend to view Wikipedia as a traditional, rigorous encyclopedia. Inclusionists tend to see it as a compendium of all knowledge, with broader remit.) Many editors identify as mergists, separatists, or some other more nuanced position, and they may have their own thoughts on dealing with recent material. Recentism in one sense—established articles that are bloated with event-specific facts at the expense of longstanding content—is considered a Wikipedia fault, as discussed above under News Spikes. Wikipedia is not a newspaper. When dealing with contemporary subjects, editors should consider whether they are simply regurgitating media coverage of an issue or actually adding well-sourced information that will remain notable over time. Yes, unneeded content can be eliminated later, but a cluttered "first draft" of an article may degrade its eventual quality and a coherent orientation may not always be attained. The second sense of recentism—the creation of a glut of new articles on a recent event—can result in a slap-dash approach to the subject and a rambling, disorganized look to the encyclopedia. Wikipedia is not an indiscriminate collection of information, and not every topic meets Wikipedia's general notability guideline to merit its own stand-alone article. Journalism is a first rough draft of history. In many cases, such content is a valuable preliminary stage in presenting information. Any encyclopedia goes through rough drafts; new Wikipedia articles are immediately published in what might be considered draft form: They can be—and are—improved in real time; these rapidly developing drafts may appear to be a clutter of news links and half-developed thoughts, but later, as the big picture emerges, the least relevant content ought to be—and often is—eliminated. 
One example is the Pitcairn sexual assault trial of 2004, which was developed day by day as the trial and appeals process advanced. Eventually, when the process ended, later editors could place everything in perspective—while also retaining the chronological coverage as an exhaustive historical record. (As of June 2024[update] this article is still marked as "Cleanup Needed", showing that the editing procedure is never really ended.) Collaborative editing on Wikipedia has resulted in a massive encyclopedia of comprehensive and well-balanced articles on the many current events of the twenty-first century. This record will be valuable to those in the future who seek to understand the history of this time period. In other words: "If we don't make sense of it today, someone else will struggle to make sense of it tomorrow." One of Wikipedia's strengths is the collation and sifting through of vast amounts of reporting on current events, producing encyclopedia-quality articles in real time about ongoing events or developing stories: natural disasters, political campaigns and elections, wars, product releases, assassinations. Finally, Wikipedia articles are often developed via on-line references, which may be temporary in nature. But by documenting timely material with reliable sources at the outset, more permanent sources will hopefully be found and used later - and, with the original online sources linked from Wikipedia, they are much more likely to be picked up and archived by the Wayback Machine or other similar web archives before they disappear. Recentism as recruitment Search engines drive a large amount of traffic to Wikipedia's articles about what were at the moment recent events—for example, the death of Ronald Reagan, the 2004 Indian Ocean earthquake and subsequent tsunami, the death of Pope John Paul II and election of a successor, the nomination of John Roberts to the Supreme Court of the United States, and newsy articles like those from other English-speaking countries. What might seem at the time to be an excessive amount of information on recent topics actually serves the purpose of drawing in new readers—and among them, potential new Wikipedians. Example: Wikipedia received positive coverage on the American National Public Radio program On the Media about its quick response to the London bombings of July 2005. Recentist articles as case studies The related articles that are written during a "recentist news frenzy" provide an in-depth look for interested readers. For example, the Terri Schiavo piece and its companion articles at Category:Terri Schiavo case provide a case-study outlook into how the state and federal governments in the United States interact constitutionally, some insight into motivations for politicians to intervene in court cases, and nuances of end-of-life issues. Suggestions for dealing with recentism Consider the ten-year test or twenty-year test as a thought experiment that might be helpful, but keep in mind the policy WP:CRYSTAL: Will someone ten or twenty years from now be confused about how this article is written? In ten or twenty years, will this addition still appear relevant? If I am devoting more time to it than other topics in the article, will it appear more relevant than what is already here? For example, in 2020, devoting more space to the 2020 United States presidential election article than to the 2000 United States presidential election article might seem logical. 
Nevertheless, in the future, when neither event is fresh, readers will benefit from a similar level of detail in both articles. As of May 2025, the 2020 entry is still twice as long as the 2000 entry. Furthermore, detailed stand-alone articles and lists may no longer comply with the general notability guideline, particularly the "Presumed" criterion. Content that seemed notable at the time might, in retrospect, violate what Wikipedia is not and other guidelines. Similarly, a person who receives a temporary blip of news coverage for a single incident or event is not necessarily an appropriate topic for a standalone biographical article, if their notability claim is not likely to still be of sustained public interest in the next few decades. After "recentist" articles have calmed down and the number of edits per day has dropped to a minimum, why not initiate comprehensive rewrites? Many articles can be condensed to keep only the most important information, the wider notable effects of an event, and links to related issues. Much of the timeline and the day-to-day updates collected in the "rough draft" stages can safely be excised. A number of the citations to breaking news reports written at the time of the event (especially those later found to be inaccurate) could be replaced by those to more scholarly, historical, or retrospective references created later on. Any detailed subarticle relating to the event may also be either merged back into the main article, or deleted (this includes any article about a subject only notable for that one event). Use Wikinews. Unlike Wikipedia, the Wikinews project was founded to provide in-depth "news article"-like coverage of current events. Just wait and see. Remember there is no deadline, and consensus can change later on. Editors writing today do not have a historical perspective on today's events, and should not pretend to have a crystal ball. This is especially true during a news spike, when there is mass interest to create and update articles on a current event, regardless of whether it may be historically significant later on. Also, editors updating an article affected by a current event may not necessarily be the same ones participating months (or even years) later in the clean-up and maintenance of the page. Above all else, editors should avoid getting into edit wars or contentious deletion discussions when trying to deal with recentism. Some editors employ the Recentism tag {{Recentism}} at the top of articles to warn the reader that the content may be tilted toward recent perspectives. (Tagging is a subject of debate: Some think tags on articles make them ugly or caution readers that a tagged article is defective.) The tag looks like this: {{Recentism}} and results in this: Of course this tag, like many others, should be employed only if editors cannot immediately rectify the problems themselves. You can find a list of articles that have been tagged by going to Category:Articles slanted towards recent events. Choose any article and examine it to see why an editor has tagged it; you may have to check the article history or the Discussion page to find out. If the tag is dated, look at the history of that month and the month preceding it. Improve the article by deleting the recentism or adding information that brings the piece into chronological balance (this may take a while because you have to find reputable sources). You might have to add an "Expert Needed" tag and move on. (For information, see Wikipedia:TC#Expert_needed.) 
Sometimes you won't agree with the assessment, and you can simply remove the Recentism tag. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Comedy_drama] | [TOKENS: 457]
Contents Comedy drama Comedy drama, also known by the portmanteau dramedy, is a hybrid genre that combines elements of comedy and drama. In film, as well as scripted television series, serious dramatic subjects (such as death, illness, betrayal, grief, etc.) are handled with realism and subtlety, while preserving a light or humorous tone. The term "dramedy" began to be used in the television industry in the 1980s. Modern television comedy dramas tend to have more humour integrated into the story than the comic relief common in drama series, but usually contain a lower joke rate than sitcoms.[citation needed][not verified in body] History In Greek theatre, plays were considered comedies or tragedies (i.e. drama): the former being light stories with a happy ending, and the latter serious stories with a sad ending. This concept even influenced Roman theatre and theatre of the Hellenistic period. Theatre of that era is thought to have long-lasting influence, even in modern narrative works. Even today, works are often classified into two broad categories: dramas and comedies. For instance, many awards that recognize achievements in film and television, such as the Primetime Emmy Awards and the Golden Globe Awards, segregate several awards into these two classifications. The 20th century saw a rise in film and television works that could be described as comedy dramas. In American cinema, The Kid (1921) by Charlie Chaplin is acknowledged as the first feature length film to blend comedy and drama. Characteristics In January 2022, Rafael Abreu, writing for the StudioBinder filmmaking blog, defined this genre as follows: A dramedy is a movie or program that balances the elements of a drama and a comedy. Also known as a comedy drama, this hybrid genre often deals with real life situations, grounded characters, and believable situations. The ratio between the drama and comedy can vary, but most of the time there is an equal measure of both, with neither side dominating. Abreu also adds that dramedies often deal with relatable and serious topics such as divorce, illness, hardship, and heartache. Notable examples Examples of comedy dramas in film include: Examples of television comedy dramas include: See also References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-PCReview_265-1] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten took over its development following the full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few block types are affected by gravity; the rest maintain their voxel position even when unsupported in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
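The block-and-grid model described above is easy to picture as a mapping from integer coordinates to block types, with players and other entities moving in continuous space on top of it. The sketch below is purely illustrative and is not Mojang's implementation (the real engine stores the world in chunked arrays and handles far more state); it only shows the idea of placing and mining blocks at discrete voxel positions:

```python
# Illustrative sketch only (not Mojang's code): a voxel world as a sparse
# mapping from integer block coordinates to block types.

from typing import Dict, Optional, Tuple

BlockPos = Tuple[int, int, int]  # (x, y, z) on the integer block grid

class VoxelWorld:
    def __init__(self) -> None:
        self.blocks: Dict[BlockPos, str] = {}  # only non-air blocks stored

    def place(self, pos: BlockPos, block_type: str) -> None:
        self.blocks[pos] = block_type

    def mine(self, pos: BlockPos) -> Optional[str]:
        # Breaking a block removes it from the grid and yields the block type.
        return self.blocks.pop(pos, None)

    def block_at(self, pos: BlockPos) -> str:
        return self.blocks.get(pos, "air")

world = VoxelWorld()
world.place((0, 64, 0), "dirt")
world.place((0, 65, 0), "torch")
print(world.block_at((0, 64, 0)))  # dirt
print(world.mine((0, 65, 0)))      # torch
print(world.block_at((0, 65, 0)))  # air
```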
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough, which takes about nine minutes to scroll past, is the game's only narrative text, and the only text of significant length directed at the player.: 10–12 At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or continuously on peaceful. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing them to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience it as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
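Stepping back to the world generation described earlier: the defining property of the map seed is determinism, so any part of the effectively infinite world can be generated on demand as it is explored, and two worlds created from the same seed have the same terrain. The sketch below is a toy illustration of that idea only; Minecraft's actual generator (layered noise, biomes, caves, structures) is far more elaborate, and the function name and height range here are invented for the example:

```python
# Toy illustration of seed-driven, deterministic world generation; this is not
# Minecraft's actual algorithm, just the idea that one seed fixes the terrain.

import hashlib

def surface_height(seed: int, x: int, z: int) -> int:
    """Deterministic pseudo-random surface height for the column at (x, z)."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return 60 + digest[0] % 16  # height between 60 and 75 (illustrative range)

SEED = 20090517  # any integer; the same seed always yields the same heights
row_first_visit = [surface_height(SEED, x, 0) for x in range(8)]
row_revisited   = [surface_height(SEED, x, 0) for x in range(8)]
assert row_first_visit == row_revisited  # same seed, same terrain
print(row_first_visit)
```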
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run multiplayer server games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced that cross-platform play between the Windows 10, iOS, and Android platforms would be added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay—such as mini-maps, waypoints, and durability counters—to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
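As a concrete illustration of these official customization hooks, both resource packs and data packs start from a folder containing a pack.mcmeta descriptor plus content sub-folders. The sketch below writes only that descriptor; the pack_format number is version-dependent (the value shown is illustrative and must be checked against the targeted game version), and the exact layout of the content sub-folders (assets/ for resource packs, data/ for data packs) has changed in detail across versions:

```python
# Minimal sketch of scaffolding a pack folder. The pack.mcmeta structure is the
# real descriptor format, but the pack_format value below is illustrative and
# must match the game version being targeted.

import json
from pathlib import Path

def scaffold_pack(root: Path, description: str, pack_format: int) -> None:
    root.mkdir(parents=True, exist_ok=True)
    meta = {"pack": {"pack_format": pack_format, "description": description}}
    (root / "pack.mcmeta").write_text(json.dumps(meta, indent=2), encoding="utf-8")
    # Resource packs then add an assets/ tree; data packs add a data/ tree.

scaffold_pack(Path("example_pack"), "Example pack created for illustration", 15)
```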
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including the return of the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release a full 1.0 version, the second part of the "Adventure Update", on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was closed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually on an annual basis and free to players who have purchased the game, each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft, then known as Cave Game and now known as the Java Edition, began in May 2009.[k] On 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang apparently taking ownership of the CraftBukkit server mod, though the acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of the sound design decisions by Rosenfeld were done accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed over a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were more broad than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed, and losing mobs would now have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos, often made by commentators, began to gain influence on YouTube. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-FOOTNOTERandell19826,_11–13-30] | [TOKENS: 10628]
Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
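As a rough sketch of this memory model, the fragment below treats main memory as a list of byte-sized cells and carries out the two instructions quoted above ("put 123 into cell 1357", then add cells 1357 and 2468 into cell 1595), together with a two's-complement reading of a byte. It is written in Python purely for illustration; the cell numbers, the second operand, and the helper names are arbitrary choices, not features of any real machine.

```python
# A toy model of byte-addressed main memory: a list of cells, each holding 0..255.
MEMORY_SIZE = 4096
memory = [0] * MEMORY_SIZE

def store(address, value):
    """Put a number into the cell at the given address (one byte, so modulo 256)."""
    memory[address] = value % 256

def load(address):
    """Read the number currently held in the cell at the given address."""
    return memory[address]

def as_twos_complement(byte):
    """Interpret a byte as a signed number in two's-complement notation (-128..+127)."""
    return byte - 256 if byte >= 128 else byte

# "Put the number 123 into the cell numbered 1357."
store(1357, 123)
# "Add the number that is in cell 1357 to the number that is in cell 2468
#  and put the answer into cell 1595."  (cell 2468 is given an arbitrary value first)
store(2468, 45)
store(1595, load(1357) + load(2468))

print(load(1595))                      # -> 168
print(as_twos_complement(0b10101000))  # the same bit pattern read as a signed value -> -88
```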
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
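The time-slicing idea described above can be sketched in a few lines: several toy "programs" are advanced a couple of steps at a time in round-robin order, so their output interleaves even though only one of them runs at any instant. This is only an illustration of the concept; the generator-based programs and the fixed slice length are stand-ins for the hardware interrupts and scheduling policies a real operating system uses.

```python
# A toy illustration of time-sharing: interleave several "programs" by giving
# each one a short slice of steps in turn.

def count_up(name, limit):
    """A pretend program: counts up, handing control back after every step."""
    for i in range(limit):
        print(f"{name}: step {i}")
        yield  # return control to the "scheduler"

def run_round_robin(programs, slice_length=2):
    """Repeatedly give each program a slice of steps until all have finished."""
    while programs:
        for program in list(programs):
            for _ in range(slice_length):
                try:
                    next(program)              # let the program execute one step
                except StopIteration:
                    programs.remove(program)   # this program has finished
                    break

run_round_robin([count_up("A", 3), count_up("B", 5)])
```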
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
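A minimal sketch of what such a program might look like in MIPS assembly is given below; it keeps the running total in one register and a counter in another, looping until the counter passes 1,000. The particular registers ($t0, $t1, $t2, $v0) and label names are conventional illustrative choices, not a listing taken from any specific machine or textbook.

```asm
        addi $t0, $zero, 0       # running total = 0
        addi $t1, $zero, 1       # counter n = 1
loop:   slti $t2, $t1, 1001      # t2 = 1 while n is still <= 1000
        beq  $t2, $zero, done    # once n passes 1000, leave the loop
        add  $t0, $t0, $t1       # total = total + n
        addi $t1, $t1, 1         # n = n + 1
        j    loop                # repeat the summing step
done:   add  $v0, $t0, $zero     # copy the result (500500) into $v0
```

After the loop finishes, the sum of 1 to 1,000 sits in $t0 and is copied into the result register by the final instruction.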
The example above is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general-purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
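To make the contrast with the assembly sketch above concrete, the same 1-to-1,000 addition collapses to a single statement in a high-level language, with registers, memory layout, and control flow left to the compiler or interpreter; the snippet below uses Python purely as an illustration (any high-level language would serve).

```python
# The same task as the assembly sketch: add together all of the numbers
# from 1 to 1,000. The language runtime handles the low-level details.
total = sum(range(1, 1001))
print(total)  # -> 500500
```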
Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PyCon] | [TOKENS: 259]
Python Conference The Python Conference (also called PyCon) is the largest annual convention for the discussion and promotion of the Python programming language. It originated in the United States but is also held in more than 40 other countries. It was one of the first computer programming conferences to develop and adhere to a code of conduct. The conference hosts tutorials, demonstrations and training sessions. PyCon 2020 was listed as one of "The best software engineering conferences [to attend] of 2020" and "As Python becomes ever more popular in the scientific community and for big data, the influence of PyCon will continue to grow." PyCon is often attended by Guido van Rossum (the author of the Python language). Other groups, such as PyLadies and Django Girls, often have concurrent sessions. It is sometimes referred to in software documentation and conference papers. It is organised by the Python Software Foundation, and is supported by many significant companies, including Microsoft, Google, and Facebook. Location history The canonical "PyCon" has run annually in the United States (except for 2014–2015, when the conference was held in Canada) since 2003, beginning in Washington, D.C.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Parasite] | [TOKENS: 10128]
Parasitism Parasitism is a close relationship between species, where one organism, the parasite, lives (at least some of the time) on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson characterised parasites' way of feeding as "predators that eat prey in units of less than one". Parasites include single-celled protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes. There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically-transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation. One major axis of classification concerns invasiveness: an endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface. Like predation, parasitism is a type of consumer–resource interaction, but unlike predators, parasites, with the exception of parasitoids, are much smaller than their hosts, do not kill them, and often live in or on their hosts for an extended period. Parasites of animals are highly specialised, each parasite species living on one given animal species, and reproduce at a faster rate than their hosts. Classic examples include interactions between vertebrate hosts and tapeworms, flukes, the malaria-causing Plasmodium species, and fleas. Parasites reduce host fitness by general or specialised pathology that ranges from parasitic castration to modification of host behaviour. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, in particular by feeding on them and by using intermediate (secondary) hosts to assist in their transmission from one definitive (primary) host to another. Although parasitism is often unambiguous, it is part of a spectrum of interactions between species, grading via parasitoidism into predation, through evolution into mutualism, and in some fungi, shading into being saprophytic. Human knowledge of parasites such as roundworms and tapeworms dates back to ancient Egypt, Greece, and Rome. In early modern times, Antonie van Leeuwenhoek observed Giardia lamblia with his microscope in 1681, while Francesco Redi described internal and external parasites including sheep liver fluke and ticks. Modern parasitology developed in the 19th century. In human culture, parasitism has negative connotations. These were exploited to satirical effect in Jonathan Swift's 1733 poem "On Poetry: A Rhapsody", comparing poets to hyperparasitical "vermin". In fiction, Bram Stoker's 1897 Gothic horror novel Dracula and its many later adaptations featured a blood-drinking parasite. Ridley Scott's 1979 film Alien was one of many works of science fiction to feature a parasitic alien species. Etymology First used in English in 1539, the word parasite comes from the Medieval French parasite, from the Latinised form parasitus, from Ancient Greek παράσιτος (parasitos) 'one who eats at the table of another', in turn from παρά (para) 'beside, by' and σῖτος (sitos) 'wheat, food'. The related term parasitism appears in English from 1611.
Evolutionary strategies Parasitism is a kind of symbiosis, a close and persistent long-term biological interaction between a parasite and its host. Unlike saprotrophs, parasites feed on living hosts, though some parasitic fungi, for instance, may continue to feed on hosts they have killed. Unlike commensalism and mutualism, the parasitic relationship harms the host, either feeding on it or, as in the case of intestinal parasites, consuming some of its food. Because parasites interact with other species, they can readily act as vectors of pathogens, causing disease. Predation is by definition not a symbiosis, as the interaction is brief, but the entomologist E. O. Wilson has characterised parasites as "predators that eat prey in units of less than one". Within that scope are many possible strategies. Taxonomists classify parasites in a variety of overlapping schemes, based on their interactions with their hosts and on their life cycles, which can be complex. An obligate parasite depends completely on the host to complete its life cycle, while a facultative parasite does not. Parasite life cycles involving only one host are called "direct"; those with a definitive host (where the parasite reproduces sexually) and at least one intermediate host are called "indirect". An endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface. Mesoparasites—like some copepods, for example—enter an opening in the host's body and remain partly embedded there. Some parasites can be generalists, feeding on a wide range of hosts, but many parasites, and the majority of protozoans and helminths that parasitise animals, are specialists and extremely host-specific. An early basic, functional division of parasites distinguished microparasites and macroparasites. These each had a mathematical model assigned in order to analyse the population movements of the host–parasite groupings. The microorganisms and viruses that can reproduce and complete their life cycle within the host are known as microparasites. Macroparasites are the multicellular organisms that reproduce and complete their life cycle outside of the host or on the host's body. Much of the thinking on types of parasitism has focused on terrestrial animal parasites of animals, such as helminths. Those in other environments and with other hosts often have analogous strategies. For example, the snubnosed eel is probably a facultative endoparasite (i.e., it is semiparasitic) that opportunistically burrows into and eats sick and dying fish. Plant-eating insects such as scale insects, aphids, and caterpillars closely resemble ectoparasites, attacking much larger plants; they serve as vectors of bacteria, fungi and viruses which cause plant diseases. As female scale insects cannot move, they are obligate parasites, permanently attached to their hosts. The sensory inputs that a parasite employs to identify and approach a potential host are known as "host cues". Such cues can include, for example, vibration, exhaled carbon dioxide, skin odours, visual and heat signatures, and moisture. Parasitic plants can use, for example, light, host physiochemistry, and volatiles to recognize potential hosts. There are six major parasitic strategies, namely parasitic castration; directly transmitted parasitism; trophically-transmitted parasitism; vector-transmitted parasitism; parasitoidism; and micropredation. These apply to parasites whose hosts are plants as well as animals. 
These strategies represent adaptive peaks; intermediate strategies are possible, but organisms in many different groups have consistently converged on these six, which are evolutionarily stable. A perspective on the evolutionary options can be gained by considering four key questions: the effect on the fitness of a parasite's hosts; the number of hosts they have per life stage; whether the host is prevented from reproducing; and whether the effect depends on intensity (number of parasites per host). From this analysis, the major evolutionary strategies of parasitism emerge, alongside predation. Parasitic castrators partly or completely destroy their host's ability to reproduce, diverting the energy that would have gone into reproduction into host and parasite growth, sometimes causing gigantism in the host. The host's other systems remain intact, allowing it to survive and to sustain the parasite. Parasitic crustaceans such as those in the specialised barnacle genus Sacculina specifically cause damage to the gonads of their many species of host crabs. In the case of Sacculina, the testes of over two-thirds of their crab hosts degenerate sufficiently for these male crabs to develop female secondary sex characteristics such as broader abdomens, smaller claws and egg-grasping appendages. Various species of helminth castrate their hosts (such as insects and snails). This may happen directly, whether mechanically by feeding on their gonads, or by secreting a chemical that destroys reproductive cells; or indirectly, whether by secreting a hormone or by diverting nutrients. For example, the trematode Zoogonus lasius, whose sporocysts lack mouths, castrates the intertidal marine snail Tritia obsoleta chemically, developing in its gonad and killing its reproductive cells. Directly transmitted parasites, not requiring a vector to reach their hosts, include such parasites of terrestrial vertebrates as lice and mites; marine parasites such as copepods and cyamid amphipods; monogeneans; and many species of nematodes, fungi, protozoans, bacteria, and viruses. Whether endoparasites or ectoparasites, each has a single host-species. Within that species, most individuals are free or almost free of parasites, while a minority carry a large number of parasites; this is known as an aggregated distribution. Trophically-transmitted parasites are transmitted by being eaten by a host. They include trematodes (all except schistosomes), cestodes, acanthocephalans, pentastomids, many roundworms, and many protozoa such as Toxoplasma. They have complex life cycles involving hosts of two or more species. In their juvenile stages they infect and often encyst in the intermediate host. When the intermediate-host animal is eaten by a predator, the definitive host, the parasite survives the digestion process and matures into an adult; some live as intestinal parasites. Many trophically transmitted parasites modify the behaviour of their intermediate hosts, increasing their chances of being eaten by a predator. As with directly transmitted parasites, the distribution of trophically transmitted parasites among host individuals is aggregated. Coinfection by multiple parasites is common. Autoinfection, where (by exception) the whole of the parasite's life cycle takes place in a single primary host, can sometimes occur in helminths such as Strongyloides stercoralis. 
Vector-transmitted parasites rely on a third party, an intermediate host, where the parasite does not reproduce sexually, to carry them from one definitive host to another. These parasites are microorganisms, namely protozoa, bacteria, or viruses, often intracellular pathogens (disease-causers). Their vectors are mostly hematophagic arthropods such as fleas, lice, ticks, and mosquitoes. For example, the deer tick Ixodes scapularis acts as a vector for diseases including Lyme disease, babesiosis, and anaplasmosis. Protozoan endoparasites, such as the malarial parasites in the genus Plasmodium and sleeping-sickness parasites in the genus Trypanosoma, have infective stages in the host's blood which are transported to new hosts by biting insects. Parasitoids are insects which sooner or later kill their hosts, placing their relationship close to predation. Most parasitoids are parasitoid wasps or other hymenopterans; others include dipterans such as phorid flies. They can be divided into two groups, idiobionts and koinobionts, differing in their treatment of their hosts. Idiobiont parasitoids sting their often-large prey on capture, either killing them outright or paralysing them immediately. The immobilised prey is then carried to a nest, sometimes alongside other prey if it is not large enough to support a parasitoid throughout its development. An egg is laid on top of the prey and the nest is then sealed. The parasitoid develops rapidly through its larval and pupal stages, feeding on the provisions left for it. Koinobiont parasitoids, which include flies as well as wasps, lay their eggs inside young hosts, usually larvae. These are allowed to go on growing, so the host and parasitoid develop together for an extended period, ending when the parasitoids emerge as adults, leaving the prey dead, eaten from inside. Some koinobionts regulate their host's development, for example preventing it from pupating or making it moult whenever the parasitoid is ready to moult. They may do this by producing hormones that mimic the host's moulting hormones (ecdysteroids), or by regulating the host's endocrine system. A micropredator attacks more than one host, reducing each host's fitness by at least a small amount, and is only in contact with any one host intermittently. This behavior makes micropredators suitable as vectors, as they can pass smaller parasites from one host to another. Most micropredators are hematophagic, feeding on blood. They include annelids such as leeches, crustaceans such as branchiurans and gnathiid isopods, various dipterans such as mosquitoes and tsetse flies, other arthropods such as fleas and ticks, vertebrates such as lampreys, and mammals such as vampire bats. Parasites use a variety of methods to infect animal hosts, including physical contact, the fecal–oral route, free-living infectious stages, and vectors, suiting their differing hosts, life cycles, and ecological contexts. Examples to illustrate some of the many possible combinations are given in the table. Among the many variations on parasitic strategies are hyperparasitism, social parasitism, brood parasitism, kleptoparasitism, sexual parasitism, and adelphoparasitism. Hyperparasites feed on another parasite, as exemplified by protozoa living in helminth parasites, or facultative or obligate parasitoids whose hosts are either conventional parasites or parasitoids. Levels of parasitism beyond secondary also occur, especially among facultative parasitoids.
In oak gall systems, there can be up to four levels of parasitism. Hyperparasites can control their hosts' populations, and are used for this purpose in agriculture and to some extent in medicine. The controlling effects can be seen in the way that the CHV1 virus helps to control the damage that chestnut blight, Cryphonectria parasitica, does to American chestnut trees, and in the way that bacteriophages can limit bacterial infections. It is likely, though little researched, that most pathogenic microparasites have hyperparasites which may prove widely useful in both agriculture and medicine. Social parasites take advantage of interspecific interactions between members of eusocial animals such as ants, termites, and bumblebees. Examples include the large blue butterfly, Phengaris arion, whose larvae employ ant mimicry to parasitise certain ants; Bombus bohemicus, a bumblebee which invades the hives of other bees and takes over reproduction while its young are raised by host workers; and Melipona scutellaris, a eusocial bee whose virgin queens escape killer workers and invade another colony without a queen. An extreme example of interspecific social parasitism is found in the ant Tetramorium inquilinum, an obligate parasite which lives exclusively on the backs of other Tetramorium ants. A mechanism for the evolution of social parasitism was first proposed by Carlo Emery in 1909. Now known as "Emery's rule", it states that social parasites tend to be closely related to their hosts, often being in the same genus. Intraspecific social parasitism occurs in parasitic nursing, where some individual young take milk from unrelated females. In wedge-capped capuchins, higher-ranking females sometimes take milk from lower-ranking females without any reciprocation. In brood parasitism, the hosts suffer increased parental investment and energy expenditure to feed parasitic young, which are commonly larger than host young. The growth rate of host nestlings is slowed, reducing the host's fitness. Brood parasites include birds in different families such as cowbirds, whydahs, cuckoos, and black-headed ducks. These do not build nests of their own, but leave their eggs in nests of other species. In the family Cuculidae, over 40% of cuckoo species are obligate brood parasites, while others are either facultative brood parasites or provide parental care. The eggs of some brood parasites mimic those of their hosts, while some cowbird eggs have tough shells, making them hard for the hosts to kill by piercing, both mechanisms implying selection by the hosts against parasitic eggs. The adult female European cuckoo further mimics a predator, the European sparrowhawk, giving her time to lay her eggs in the host's nest unobserved. Host species often combat parasitic egg mimicry through egg polymorphism, having two or more egg phenotypes within a single population of a species. Multiple phenotypes in host eggs decrease the probability of a parasitic species accurately "matching" their eggs to host eggs. In kleptoparasitism (from Greek κλέπτης (kleptēs), "thief"), parasites steal food gathered by the host. The parasitism is often on close relatives, whether within the same species or between species in the same genus or family. For instance, the many lineages of cuckoo bees lay their eggs in the nest cells of other bees in the same family. 
Kleptoparasitism is uncommon generally but conspicuous in birds; some such as skuas are specialised in pirating food from other seabirds, relentlessly chasing them down until they disgorge their catch. A unique approach is seen in some species of anglerfish, such as Ceratias holboelli, where the males are reduced to tiny sexual parasites, wholly dependent on females of their own species for survival, permanently attached below the female's body, and unable to fend for themselves. The female nourishes the male and protects him from predators, while the male gives nothing back except the sperm that the female needs to produce the next generation. Adelphoparasitism, (from Greek ἀδελφός (adelphós), brother), also known as sibling-parasitism, occurs where the host species is closely related to the parasite, often in the same family or genus. In the citrus blackfly parasitoid, Encarsia perplexa, unmated females may lay haploid eggs in the fully developed larvae of their own species, producing male offspring, while the marine worm Bonellia viridis has a similar reproductive strategy, although the larvae are planktonic. Examples of the major variant strategies are illustrated. Taxonomic range Parasitism has an extremely wide taxonomic range, including animals, plants, fungi, protozoans, bacteria, and viruses. Parasitism is widespread in the animal kingdom, and has evolved independently from free-living forms hundreds of times. Many types of helminth including flukes and cestodes have complete life cycles involving two or more hosts. By far the largest group is the parasitoid wasps in the Hymenoptera. The phyla and classes with the largest numbers of parasitic species are listed in the table. Numbers are conservative minimum estimates. The columns for Endo- and Ecto-parasitism refer to the definitive host, as documented in the Vertebrate and Invertebrate columns. A hemiparasite or partial parasite such as mistletoe derives some of its nutrients from another living plant, whereas a holoparasite such as Cuscuta derives all of its nutrients from another plant. Parasitic plants make up about one per cent of angiosperms and are in almost every biome in the world. All these plants have modified roots, haustoria, which penetrate the host plants, connecting them to the conductive system—either the xylem, the phloem, or both. This provides them with the ability to extract water and nutrients from the host. A parasitic plant is classified depending on where it latches onto the host, either the stem or the root, and the amount of nutrients it requires. Since holoparasites have no chlorophyll and therefore cannot make food for themselves by photosynthesis, they are always obligate parasites, deriving all their food from their hosts. Some parasitic plants can locate their host plants by detecting chemicals in the air or soil given off by host shoots or roots, respectively. About 4,500 species of parasitic plant in approximately 20 families of flowering plants are known. Species within the Orobanchaceae (broomrapes) are among the most economically destructive of all plants. Species of Striga (witchweeds) are estimated to cost billions of dollars a year in crop yield loss, infesting over 50 million hectares of cultivated land within Sub-Saharan Africa alone. Striga infects both grasses and grains, including corn, rice, and sorghum, which are among the world's most important food crops. 
Orobanche also threatens a wide range of other important crops, including peas, chickpeas, tomatoes, carrots, and varieties of cabbage. Yield loss from Orobanche can be total; despite extensive research, no method of control has been entirely successful. Many plants and fungi exchange carbon and nutrients in mutualistic mycorrhizal relationships. However, some 400 species of myco-heterotrophic plants, mostly in the tropics, effectively cheat by taking carbon from a fungus rather than exchanging it for minerals. They have much reduced roots, as they do not need to absorb water from the soil; their stems are slender with few vascular bundles, and their leaves are reduced to small scales, as they do not photosynthesise. Their seeds are small and numerous, so they appear to rely on being infected by a suitable fungus soon after germinating. Parasitic fungi derive some or all of their nutritional requirements from plants, other fungi, or animals. Plant pathogenic fungi are classified into three categories depending on their mode of nutrition: biotrophs, hemibiotrophs and necrotrophs. Biotrophic fungi derive nutrients from living plant cells, and during the course of infection they colonise their plant host in such a way as to keep it alive for as long as possible. One well-known example of a biotrophic pathogen is Ustilago maydis, the causative agent of corn smut disease. Necrotrophic pathogens, on the other hand, kill host cells and feed on them saprophytically, an example being the root-colonising honey fungi in the genus Armillaria. Hemibiotrophic pathogens begin colonising their hosts as biotrophs, but subsequently kill off host cells and feed as necrotrophs, a phenomenon termed the biotrophy–necrotrophy switch. Pathogenic fungi are well-known causative agents of disease in animals, including humans. Fungal infections (mycoses) are estimated to kill 1.6 million people each year. One example of a potent group of fungal animal pathogens is the Microsporidia, obligate intracellular parasitic fungi that largely affect insects, but may also affect vertebrates including humans, causing the intestinal infection microsporidiosis. Protozoa such as Plasmodium, Trypanosoma, and Entamoeba are endoparasitic. They cause serious diseases in vertebrates including humans—in these examples, malaria, sleeping sickness, and amoebic dysentery—and have complex life cycles. Many bacteria are parasitic, though they are more generally thought of as pathogens causing disease. Parasitic bacteria are extremely diverse, and infect their hosts by a variety of routes. To give a few examples, Bacillus anthracis, the cause of anthrax, is spread by contact with infected domestic animals; its spores, which can survive for years outside the body, can enter a host through an abrasion or may be inhaled. Borrelia, the cause of Lyme disease and relapsing fever, is transmitted by vectors, ticks of the genus Ixodes, from the diseases' reservoirs in animals such as deer. Campylobacter jejuni, a cause of gastroenteritis, is spread by the fecal–oral route from animals, or by eating insufficiently cooked poultry, or by contaminated water. Haemophilus influenzae, an agent of bacterial meningitis and respiratory tract infections such as pneumonia and bronchitis, is transmitted by droplet contact. Treponema pallidum, the cause of syphilis, is spread by sexual activity. 
Viruses are obligate intracellular parasites, characterised by extremely limited biological function, to the point where, while they are evidently able to infect all other organisms from bacteria and archaea to animals, plants and fungi, it is unclear whether they can themselves be described as living. They can be either RNA or DNA viruses consisting of a single or double strand of genetic material (RNA or DNA, respectively), covered in a protein coat and sometimes a lipid envelope. They thus lack all the usual machinery of the cell such as enzymes, relying entirely on the host cell's ability to replicate DNA and synthesise proteins. Most viruses are bacteriophages, infecting bacteria. Evolutionary ecology Parasitism is a major aspect of evolutionary ecology; for example, almost all free-living animals are host to at least one species of parasite. Vertebrates, the best-studied group, are hosts to between 75,000 and 300,000 species of helminths and an uncounted number of parasitic microorganisms. On average, a mammal species hosts four species of nematode, two of trematodes, and two of cestodes. Humans have 342 species of helminth parasites, and 70 species of protozoan parasites. Some three-quarters of the links in food webs include a parasite, important in regulating host numbers. Perhaps 40 per cent of described species are parasitic. Parasitism is hard to demonstrate from the fossil record, but holes in the mandibles of several specimens of Tyrannosaurus may have been caused by Trichomonas-like parasites. Saurophthirus, the Early Cretaceous flea, parasitized pterosaurs. Eggs that belonged to nematode worms and probably protozoan cysts were found in the Late Triassic coprolite of phytosaur. This rare find in Thailand reveals more about the ecology of prehistoric parasites. As hosts and parasites evolve together, their relationships often change. When a parasite is in a sole relationship with a host, selection drives the relationship to become more benign, even mutualistic, as the parasite can reproduce for longer if its host lives longer. But where parasites are competing, selection favours the parasite that reproduces fastest, leading to increased virulence. There are thus varied possibilities in host–parasite coevolution. Evolutionary epidemiology analyses how parasites spread and evolve, whereas Darwinian medicine applies similar evolutionary thinking to non-parasitic diseases like cancer and autoimmune conditions. Long-term partnerships can lead to a relatively stable relationship tending to commensalism or mutualism, as, all else being equal, it is in the evolutionary interest of the parasite that its host thrives. A parasite may evolve to become less harmful for its host or a host may evolve to cope with the unavoidable presence of a parasite—to the point that the parasite's absence causes the host harm. For example, although animals parasitised by worms are often clearly harmed, such infections may also reduce the prevalence and effects of autoimmune disorders in animal hosts, including humans. In a more extreme example, some nematode worms cannot reproduce, or even survive, without infection by Wolbachia bacteria. Lynn Margulis and others have argued, following Peter Kropotkin's 1902 Mutual Aid: A Factor of Evolution, that natural selection drives relationships from parasitism to mutualism when resources are limited. 
This process may have been involved in the symbiogenesis which formed the eukaryotes from an intracellular relationship between archaea and bacteria, though the sequence of events remains largely undefined. Competition between parasites can be expected to favour faster reproducing and therefore more virulent parasites, by natural selection. Among competing parasitic insect-killing bacteria of the genera Photorhabdus and Xenorhabdus, virulence depended on the relative potency of the antimicrobial toxins (bacteriocins) produced by the two strains involved. When only one bacterium could kill the other, the other strain was excluded by the competition. But when caterpillars were infected with bacteria both of which had toxins able to kill the other strain, neither strain was excluded, and their virulence was less than when the insect was infected by a single strain. A parasite sometimes undergoes cospeciation with its host, resulting in the pattern described in Fahrenholz's rule, that the phylogenies of the host and parasite come to mirror each other. An example is between the simian foamy virus (SFV) and its primate hosts. The phylogenies of SFV polymerase and the mitochondrial cytochrome c oxidase subunit II from African and Asian primates were found to be closely congruent in branching order and divergence times, implying that the simian foamy viruses cospeciated with Old World primates for at least 30 million years. The presumption of a shared evolutionary history between parasites and hosts can help elucidate how host taxa are related. For instance, there has been a dispute about whether flamingos are more closely related to storks or ducks. The fact that flamingos share parasites with ducks and geese was initially taken as evidence that these groups were more closely related to each other than either is to storks. However, evolutionary events such as the duplication, or the extinction of parasite species (without similar events on the host phylogeny) often erode similarities between host and parasite phylogenies. In the case of flamingos, they have similar lice to those of grebes. Flamingos and grebes do have a common ancestor, implying cospeciation of birds and lice in these groups. Flamingo lice then switched hosts to ducks, creating the situation which had confused biologists. Parasites infect sympatric hosts (those within their same geographical area) more effectively, as has been shown with digenetic trematodes infecting lake snails. This is in line with the Red Queen hypothesis, which states that interactions between species lead to constant natural selection for coadaptation. Parasites track the locally common hosts' phenotypes, so the parasites are less infective to allopatric hosts, those from different geographical regions. Some parasites modify host behaviour in order to increase their transmission between hosts, often in relation to predator and prey (parasite increased trophic transmission). For example, in the California coastal salt marsh, the fluke Euhaplorchis californiensis reduces the ability of its killifish host to avoid predators. This parasite matures in egrets, which are more likely to feed on infected killifish than on uninfected fish. Another example is the protozoan Toxoplasma gondii, a parasite that matures in cats but can be carried by many other mammals. Uninfected rats avoid cat odors, but rats infected with T. gondii are drawn to this scent, which may increase transmission to feline hosts. 
The malaria parasite modifies the skin odour of its human hosts, increasing their attractiveness to mosquitoes and hence improving the chance for the parasite to be transmitted. Spiders of the species Cyclosa argenteoalba often have parasitoid wasp larvae attached to them which alter their web-building behaviour. Instead of producing their normal sticky spiral-shaped webs, parasitised spiders build simplified webs; this manipulated behaviour lasts longer and is more pronounced the longer the parasites are left on the spiders. Parasites can exploit their hosts to carry out a number of functions that they would otherwise have to carry out for themselves. Parasites which lose those functions then have a selective advantage, as they can divert resources to reproduction. Many insect ectoparasites including bedbugs, batbugs, lice and fleas have lost their ability to fly, relying instead on their hosts for transport. Trait loss more generally is widespread among parasites. An extreme example is the myxosporean Henneguya zschokkei, a parasite of fish and the only animal known to have lost the ability to respire aerobically: its cells lack mitochondria. Hosts have evolved a variety of defensive measures against their parasites, including physical barriers like the skin of vertebrates, the immune system of mammals, insects actively removing parasites, and defensive chemicals in plants. The evolutionary biologist W. D. Hamilton suggested that sexual reproduction could have evolved to help to defeat multiple parasites by enabling genetic recombination, the shuffling of genes to create varied combinations. Hamilton showed by mathematical modelling that sexual reproduction would be evolutionarily stable in different situations, and that the theory's predictions matched the actual ecology of sexual reproduction. However, there may be a trade-off between immunocompetence and the secondary sex characteristics of breeding male vertebrates, such as the plumage of peacocks and the manes of lions. This is because the male hormone testosterone encourages the growth of secondary sex characteristics, favouring such males in sexual selection, at the price of reducing their immune defences. The physical barrier of the tough and often dry and waterproof skin of reptiles, birds and mammals keeps invading microorganisms from entering the body. Human skin also secretes sebum, which is toxic to most microorganisms. On the other hand, larger parasites such as trematodes detect chemicals produced by the skin to locate their hosts when they enter the water. Vertebrate saliva and tears contain lysozyme, an enzyme that breaks down the cell walls of invading bacteria. Should the organism pass the mouth, the stomach with its hydrochloric acid, toxic to most microorganisms, is the next line of defence. Some intestinal parasites have a thick, tough outer coating which is digested slowly or not at all, allowing them to pass through the stomach alive and enter the intestine, where they begin the next stage of their life. Once inside the body, parasites must overcome the immune system's serum proteins and pattern recognition receptors, intracellular and cellular, that trigger the adaptive immune system's lymphocytes such as T cells and antibody-producing B cells. These have receptors that recognise parasites. Insects often adapt their nests to reduce parasitism. 
For example, one of the key reasons why the wasp Polistes canadensis nests across multiple combs, rather than building a single comb like much of the rest of its genus, is to avoid infestation by tineid moths. The tineid moth lays its eggs within the wasps' nests and then these eggs hatch into larvae that can burrow from cell to cell and prey on wasp pupae. Adult wasps attempt to remove and kill moth eggs and larvae by chewing down the edges of cells, coating the cells with an oral secretion that gives the nest a dark brownish appearance. Plants respond to parasite attack with a series of chemical defences, such as polyphenol oxidase, under the control of the jasmonic acid-insensitive (JA) and salicylic acid (SA) signalling pathways. The different biochemical pathways are activated by different attacks, and the two pathways can interact positively or negatively. In general, plants can either initiate a specific or a non-specific response. Specific responses involve recognition of a parasite by the plant's cellular receptors, leading to a strong but localised response: defensive chemicals are produced around the area where the parasite was detected, blocking its spread, and avoiding wasting defensive production where it is not needed. Non-specific defensive responses are systemic, meaning that the responses are not confined to an area of the plant, but spread throughout the plant, making them costly in energy. These are effective against a wide range of parasites. When damaged, such as by lepidopteran caterpillars, leaves of plants including maize and cotton release increased amounts of volatile chemicals such as terpenes that signal they are being attacked; one effect of this is to attract parasitoid wasps, which in turn attack the caterpillars. Biology and conservation Parasitism and parasite evolution were until the twenty-first century studied by parasitologists, in a science dominated by medicine, rather than by ecologists or evolutionary biologists. Even though parasite-host interactions were plainly ecological and important in evolution, the history of parasitology caused what the evolutionary ecologist Robert Poulin called a "takeover of parasitism by parasitologists", leading ecologists to ignore the area. This was in his opinion "unfortunate", as parasites are "omnipresent agents of natural selection" and significant forces in evolution and ecology. In his view, the long-standing split between the sciences limited the exchange of ideas, with separate conferences and separate journals. The technical languages of ecology and parasitology sometimes involved different meanings for the same words. There were philosophical differences, too: Poulin notes that, influenced by medicine, "many parasitologists accepted that evolution led to a decrease in parasite virulence, whereas modern evolutionary theory would have predicted a greater range of outcomes". Their complex relationships make parasites difficult to place in food webs: a trematode with multiple hosts for its various life cycle stages would occupy many positions in a food web simultaneously, and would set up loops of energy flow, confusing the analysis. Further, since nearly every animal has (multiple) parasites, parasites would occupy the top levels of every food web. Parasites can play a role in the proliferation of non-native species. For example, invasive green crabs are minimally affected by native trematodes on the Eastern Atlantic coast. This helps them outcompete native crabs such as the Atlantic Rock and Jonah crabs. 
Ecological parasitology can be important to attempts at control, like during the campaign for eradicating the Guinea worm. Even though the parasite was eradicated in all but four countries, the worm began using frogs as an intermediary host before infecting dogs, making control more difficult than it would have been if the relationships had been better understood. Although parasites are widely considered to be harmful, the eradication of all parasites would not be beneficial. Parasites account for at least half of life's diversity; they perform important ecological roles; and without parasites, organisms might tend to asexual reproduction, diminishing the diversity of traits brought about by sexual reproduction. Parasites provide an opportunity for the transfer of genetic material between species, facilitating evolutionary change. Many parasites require multiple hosts of different species to complete their life cycles and rely on predator-prey or other stable ecological interactions to get from one host to another. The presence of parasites thus indicates that an ecosystem is healthy. An ectoparasite, the California condor louse, Colpocephalum californici, became a well-known conservation issue. A large and costly captive breeding program was run in the United States to rescue the California condor. It was host to a louse, which lived only on it. Any lice found were "deliberately killed" during the program, to keep the condors in the best possible health. The result was that one species, the condor, was saved and returned to the wild, while another species, the parasite, became extinct. Although parasites are often omitted in depictions of food webs, they usually occupy the top position. Parasites can function like keystone species, reducing the dominance of superior competitors and allowing competing species to co-exist. A single parasite species usually has an aggregated distribution across host animals, which means that most hosts carry few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology, as it renders parametric statistics as commonly used by biologists invalid. Log-transformation of data before the application of parametric test, or the use of non-parametric statistics is recommended by several authors, but this can give rise to further problems, so quantitative parasitology is based on more advanced biostatistical methods. History Human parasites including roundworms, the Guinea worm, threadworms and tapeworms are mentioned in Egyptian papyrus records from 3000 BC onwards; the Ebers Papyrus describes hookworm. In ancient Greece, parasites including the bladder worm are described in the Hippocratic Corpus, while the comic playwright Aristophanes called tapeworms "hailstones". The Roman physicians Celsus and Galen documented the roundworms Ascaris lumbricoides and Enterobius vermicularis. In his Canon of Medicine, completed in 1025, the Persian physician Avicenna recorded human and animal parasites including roundworms, threadworms, the Guinea worm and tapeworms. In his 1397 book Traité de l'état, science et pratique de l'art de la Bergerie (Account of the state, science and practice of the art of shepherding), Jehan de Brie [fr] wrote the first description of a trematode endoparasite, the sheep liver fluke Fasciola hepatica. 
In the early modern period, Francesco Redi's 1668 book Esperienze Intorno alla Generazione degl'Insetti (Experiences of the Generation of Insects), explicitly described ecto- and endoparasites, illustrating ticks, the larvae of nasal flies of deer, and sheep liver fluke. Redi noted that parasites develop from eggs, contradicting the theory of spontaneous generation. In his 1684 book Osservazioni intorno agli animali viventi che si trovano negli animali viventi (Observations on Living Animals found in Living Animals), Redi described and illustrated over 100 parasites including the large roundworm in humans that causes ascariasis. Redi was the first to name the cysts of Echinococcus granulosus seen in dogs and sheep as parasitic; a century later, in 1760, Peter Simon Pallas correctly suggested that these were the larvae of tapeworms. In 1681, Antonie van Leeuwenhoek observed and illustrated the protozoan parasite Giardia lamblia, and linked it to "his own loose stools". This was the first protozoan parasite of humans to be seen under a microscope. A few years later, in 1687, the Italian biologists Giovanni Cosimo Bonomo and Diacinto Cestoni described scabies as caused by the parasitic mite Sarcoptes scabiei, marking it as the first disease of humans with a known microscopic causative agent. Modern parasitology developed in the 19th century with accurate observations and experiments by many researchers and clinicians; the term was first used in 1870. In 1828, James Annersley described amoebiasis, protozoal infections of the intestines and the liver, though the pathogen, Entamoeba histolytica, was not discovered until 1873 by Friedrich Lösch. James Paget discovered the intestinal nematode Trichinella spiralis in humans in 1835. James McConnell described the human liver fluke, Clonorchis sinensis, in 1875. Algernon Thomas and Rudolf Leuckart independently made the first discovery of the life cycle of a trematode, the sheep liver fluke, by experiment in 1881–1883. In 1877 Patrick Manson discovered the life cycle of the filarial worms that cause elephantiasis transmitted by mosquitoes. Manson further predicted that the malaria parasite, Plasmodium, had a mosquito vector, and persuaded Ronald Ross to investigate. Ross confirmed that the prediction was correct in 1897–1898. At the same time, Giovanni Battista Grassi and others described the malaria parasite's life cycle stages in Anopheles mosquitoes. Ross was controversially awarded the 1902 Nobel prize for his work, while Grassi was not. In 1903, David Bruce identified the protozoan parasite and the tsetse fly vector of African trypanosomiasis. Given the importance of malaria, with some 220 million people infected annually, many attempts have been made to interrupt its transmission. Various methods of malaria prophylaxis have been tried including the use of antimalarial drugs to kill off the parasites in the blood, the eradication of its mosquito vectors with organochlorine and other insecticides, and the development of a malaria vaccine. All of these have proven problematic, with drug resistance, insecticide resistance among mosquitoes, and repeated failure of vaccines as the parasite mutates. The first and as of 2015 the only licensed vaccine for any parasitic disease of humans is RTS,S for Plasmodium falciparum malaria. Several groups of parasites, including microbial pathogens and parasitoidal wasps have been used as biological control agents in agriculture and horticulture. 
Poulin observes that the widespread prophylactic use of anthelmintic drugs in domestic sheep and cattle constitutes a worldwide uncontrolled experiment in the life-history evolution of their parasites. The outcomes depend on whether the drugs decrease the chance of a helminth larva reaching adulthood. If so, natural selection can be expected to favour the production of eggs at an earlier age. If, on the other hand, the drugs mainly affect adult parasitic worms, selection could cause delayed maturity and increased virulence. Such changes appear to be underway: the nematode Teladorsagia circumcincta is changing its adult size and reproductive rate in response to drugs. Cultural significance In the classical era, the concept of the parasite was not strictly pejorative: the parasitus was an accepted role in Roman society, in which a person could live off the hospitality of others, in return for "flattery, simple services, and a willingness to endure humiliation". Parasitism has a derogatory sense in popular usage. According to the immunologist John Playfair, "In everyday speech, the term 'parasite' is loaded with derogatory meaning. A parasite is a sponger, a lazy profiteer, a drain on society." The satirical cleric Jonathan Swift alludes to hyperparasitism in his 1733 poem "On Poetry: A Rhapsody", comparing poets to "vermin" who "teaze and pinch their foes":

The vermin only teaze and pinch
Their foes superior by an inch.
So nat'ralists observe, a flea
Hath smaller fleas that on him prey;
And these have smaller fleas to bite 'em.
And so proceeds ad infinitum.
Thus every poet, in his kind,
Is bit by him that comes behind:

A 2022 study examined the naming of some 3000 parasite species discovered in the previous two decades. Of those named after scientists, over 80% were named for men, whereas about a third of the authors of papers on parasites were women. The study found that the percentage of parasite species named for relatives or friends of the author has risen sharply in the same period. In Bram Stoker's 1897 Gothic horror novel Dracula, and its many film adaptations, the eponymous Count Dracula is a blood-drinking parasite (a vampire). The critic Laura Otis argues that as a "thief, seducer, creator, and mimic, Dracula is the ultimate parasite. The whole point of vampirism is sucking other people's blood—living at other people's expense." Disgusting and terrifying parasitic alien species are widespread in science fiction, as for instance in Ridley Scott's 1979 film Alien. In one scene, a Xenomorph bursts out of the chest of a dead man, with blood squirting out under high pressure assisted by explosive squibs. Animal organs were used to reinforce the shock effect. The scene was filmed in a single take, and the startled reactions of the actors were genuine. The entomopathogenic fungus Cordyceps is represented culturally as a deadly threat to the human race. The video game series The Last of Us (2013–present) and its television adaptation present Cordyceps as a parasite of humans, causing a zombie apocalypse. Its human hosts initially become violent "infected" beings, before turning into blind zombie "clickers", complete with fruiting bodies growing out from their faces.
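As a brief aside on the aggregated distribution of parasites among hosts noted above, which undermines ordinary parametric statistics, the following minimal sketch illustrates the approaches mentioned there. It is only an illustration, assuming Python with numpy and scipy available, and the sample sizes and negative binomial parameters are invented for the example: it simulates overdispersed parasite counts for two hypothetical host samples, then compares a raw t-test, a t-test on log(x + 1)-transformed counts, and the non-parametric Mann-Whitney U test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Negative binomial counts are a standard model for aggregation: most hosts
# carry few parasites while a minority carry very many.
site_a = rng.negative_binomial(n=0.5, p=0.05, size=200)  # mean burden ~9.5
site_b = rng.negative_binomial(n=0.5, p=0.04, size=200)  # mean burden ~12

# Raw t-test: its normality assumption is badly violated by aggregated counts.
t_raw = stats.ttest_ind(site_a, site_b, equal_var=False)

# t-test after a log(x + 1) transformation, as recommended by some authors.
t_log = stats.ttest_ind(np.log1p(site_a), np.log1p(site_b), equal_var=False)

# Non-parametric alternative.
u_np = stats.mannwhitneyu(site_a, site_b)

print(f"raw t-test      p = {t_raw.pvalue:.3f}")
print(f"log1p t-test    p = {t_log.pvalue:.3f}")
print(f"Mann-Whitney U  p = {u_np.pvalue:.3f}")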
========================================
[SOURCE: https://en.wikipedia.org/wiki/Go!_(programming_language)] | [TOKENS: 330]
Go! (programming language) Go! is an agent-based programming language in the tradition of logic-based programming languages like Prolog. It was introduced in a 2003 paper by Francis McCabe and Keith Clark. Design The authors of Go! describe it as "a multi-paradigm programming language that is oriented to the needs of programming secure, production quality and agent-based applications. It is multi-threaded, strongly typed and higher order (in the functional programming sense). It has relation, function and action procedure definitions. Threads execute action procedures, calling functions and querying relations as needed. Threads in different agents communicate and coordinate using asynchronous messages. Threads within the same agent can also use shared dynamic relations acting as Linda-style tuple stores." The authors also propose that the language is suitable for representing ontologies due to its integration of logic, functional and imperative styles of programming. Example Clark and McCabe's paper illustrates the "ontology-oriented" type and declaration style of Go! with a short example. Conflict with Google In November 2009, Google released a similarly named Go programming language (with no exclamation point). McCabe asked Google to change the name of their language, as he was concerned they were "steam-rolling over us". The issue received attention among technology news websites, with some of them characterizing Go! as "obscure". The issue thread opened on the subject was closed by a Google developer on 12 October 2010 with the custom status "Unfortunate" and with the following comment: "there are many computing products and services named Go. In the 11 months since our release, there has been minimal confusion of the two languages."
========================================
[SOURCE: https://github.com/signup?ref_cta=Sign+up&ref_loc=header+logged+out&ref_page=%2F&source=header-home] | [TOKENS: 82]
Create your free account Sign up for GitHub GitHub requires JavaScript to proceed with the sign up process. Please enable JavaScript. Select Country/Region Sorry, something went wrong. Sorry, something went wrong. No results found Verify your account By creating an account, you agree to the Terms of Service. For more information about GitHub's privacy practices, see the GitHub Privacy Statement. We'll occasionally send you account-related emails.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEAsakura200072–73-65] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One form) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third party's registration of the trademark meant the console could not be launched officially, so the market was initially dominated by the officially distributed Sega Saturn; as the Saturn was withdrawn, however, imports of the PlayStation and widespread piracy increased. In China, the Sega Saturn was the most popular 32-bit console, but after it left the market the PlayStation's installed base grew to around 300,000 users by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols replaced certain letters, rendered in print as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts widened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance of the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in mid-2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the speed needed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate up to 4,000 sprites and 180,000 polygons per second, or 360,000 polygons per second if flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications using a C compiler. 
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo Pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square. Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a set of symbols that became a trademark incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this convention is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which could also be clicked in, adding two new buttons), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, and slightly different shoulder buttons, and includes rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite the device having been promoted in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game disc inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. The PlayStation is widely emulated and its games can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, the drive could not detect the wobble frequency (so duplicated discs omitted it), since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, experience skipping during full-motion video playback or produce physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
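The boot check described above can be pictured with a small, purely illustrative Python sketch. It is not Sony's firmware logic: the Disc class, its fields, and the "SCEx" region strings (SCEI, SCEA, SCEE) are assumptions drawn from common descriptions of the scheme rather than from this article.

from dataclasses import dataclass
from typing import Optional

# Illustrative model only: the real check happens in the console's CD controller,
# and the exact strings and fields here are assumptions, not documented behaviour.

@dataclass
class Disc:
    pregap_data: bytes            # readable (and copyable) by any CD drive
    wobble_string: Optional[str]  # signal modulated onto the groove; None on a burned copy

def console_will_boot(disc: Disc, console_region: str = "SCEA") -> bool:
    # A duplicated disc reproduces pregap_data but not the wobble, because an
    # ordinary drive cancels the wobble as if it were disc-surface oscillation.
    return disc.wobble_string == console_region

print(console_will_boot(Disc(b"...", "SCEA")))  # True: genuine disc, matching region
print(console_will_boot(Disc(b"...", None)))    # False: burned copy, wobble missing
print(console_will_boot(Disc(b"...", "SCEE")))  # False: genuine disc, wrong region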
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to contribute to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, which was the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the machines of Sega and Nintendo. 
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute roughly 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry forward, as well as its role in transitioning the industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely valuing the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the per-unit cost of production was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hyperbolic_geometric_graph] | [TOKENS: 1671]
Contents Hyperbolic geometric graph A hyperbolic geometric graph (HGG) or hyperbolic geometric network (HGN) is a special type of spatial network where (1) latent coordinates of nodes are sprinkled according to a probability density function into a hyperbolic space of constant negative curvature and (2) an edge between two nodes is present if they are close according to a function of the metric (typically either a Heaviside step function resulting in deterministic connections between vertices closer than a certain threshold distance, or a decaying function of hyperbolic distance yielding the connection probability). A HGG generalizes a random geometric graph (RGG), whose embedding space is Euclidean. Mathematical formulation Mathematically, a HGG is a graph G(V, E) with a vertex set V (cardinality N = |V|) and an edge set E constructed by considering the nodes as points placed onto a 2-dimensional hyperbolic space ℍ²_ζ of constant negative Gaussian curvature −ζ² and cut-off radius R, i.e. the radius of the Poincaré disk, which can be visualized using a hyperboloid model. Each point i has hyperbolic polar coordinates (r_i, θ_i) with 0 ≤ r_i ≤ R and 0 ≤ θ_i < 2π. The hyperbolic law of cosines allows one to measure the distance d_ij between two points i and j: cosh(ζ d_ij) = cosh(ζ r_i) cosh(ζ r_j) − sinh(ζ r_i) sinh(ζ r_j) cos Δ, where the angle Δ is the (smallest) angle between the two position vectors. In the simplest case, an edge (i, j) is established iff (if and only if) the two nodes are within a certain neighborhood radius r, that is d_ij ≤ r; this corresponds to an influence threshold. In general, a link will be established with a probability depending on the distance d_ij. A connectivity decay function γ(s): ℝ⁺ → [0, 1] represents the probability of assigning an edge to a pair of nodes at distance s. In this framework, the simple case of a hard neighborhood threshold, as in random geometric graphs, is referred to as the truncation decay function. Generating hyperbolic geometric graphs Krioukov et al. describe how to generate hyperbolic geometric graphs with a uniformly random node distribution (as well as generalized versions) on a disk of radius R in ℍ²_ζ. These graphs yield a power-law distribution for the node degrees. The angular coordinate θ of each point/node is chosen uniformly at random from [0, 2π], while the radial coordinate r is chosen according to the probability density ρ(r) = α sinh(α r) / (cosh(α R) − 1), for 0 ≤ r ≤ R. The growth parameter α > 0 controls the distribution: for α = ζ the distribution is uniform in ℍ²_ζ, for smaller values the nodes are distributed more towards the center of the disk, and for bigger values more towards the border. 
In this model, edges between nodes u and v exist iff d_uv < R, or with probability γ(d_uv) if a more general connectivity decay function is used. The average degree is controlled by the radius R of the hyperbolic disk. It can be shown that for α/ζ > 1/2 the node degrees follow a power-law distribution with exponent γ = 1 + 2α/ζ. The image depicts randomly generated graphs for different values of α and R in ℍ²_1. It can be seen how α affects the distribution of the nodes and R the connectivity of the graph. The native representation, in which the distance variables have their true hyperbolic values, is used for the visualization of the graph; edges are therefore straight lines. The naive algorithm for the generation of hyperbolic geometric graphs distributes the nodes on the hyperbolic disk by sampling the angular and radial coordinates of each point at random. For every pair of nodes an edge is then inserted with probability given by the value of the connectivity decay function at their distance. A sketch of the procedure is given below: N is the number of nodes to generate, the radial coordinate is distributed according to the probability density function ρ by means of inverse transform sampling, and U denotes the uniform sampling of a value in a given interval. Because the algorithm checks for edges between all pairs of nodes, the runtime is quadratic. For applications where N is big, this is no longer viable and algorithms with subquadratic runtime are needed. To avoid checking for edges between every pair of nodes, modern generators use additional data structures that partition the graph into bands. A visualization of this shows a hyperbolic graph with the boundaries of the bands drawn in orange. In this case, the partitioning is done along the radial axis. Points are stored sorted by their angular coordinate in their respective band. For each point u, the limits of its hyperbolic circle of radius R can be (over-)estimated and used to perform the edge check only for points that lie in a band intersecting the circle. Additionally, the sorting within each band can be used to further reduce the number of points to examine by considering only points whose angular coordinate lies in a certain range around that of u (this range is also computed by over-estimating the hyperbolic circle around u). Using this and other extensions of the algorithm, time complexities of O(n log log n + m) (where n is the number of nodes and m the number of edges) are possible with high probability. 
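As a concrete illustration, here is a minimal Python sketch of the naive quadratic-time generator described above, assuming curvature −1 (ζ = 1) and the hard connection threshold d_uv < R; the function names are illustrative rather than taken from any particular library.

import math
import random

def sample_radius(R, alpha):
    # Inverse transform sampling of rho(r) = alpha*sinh(alpha*r)/(cosh(alpha*R) - 1).
    u = random.random()
    return math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha

def hyperbolic_distance(r1, t1, r2, t2):
    # Hyperbolic law of cosines for curvature -1 (zeta = 1).
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # smallest angle between the positions
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(1.0, arg))  # clamp to guard against rounding slightly below 1

def naive_hgg(N, R, alpha):
    # Place N nodes on the hyperbolic disk of radius R, then connect every pair
    # closer than R (the hard-threshold, i.e. truncation, decay function).
    nodes = [(sample_radius(R, alpha), random.uniform(0.0, 2.0 * math.pi))
             for _ in range(N)]
    edges = [(i, j)
             for i in range(N)
             for j in range(i + 1, N)
             if hyperbolic_distance(*nodes[i], *nodes[j]) < R]
    return nodes, edges

# Example: for alpha/zeta > 1/2 the degree distribution should approach a power law
# with exponent gamma = 1 + 2*alpha/zeta (here zeta = 1).
nodes, edges = naive_hgg(N=500, R=12.0, alpha=0.75)
print(len(nodes), len(edges))

Replacing the hard threshold with a decaying connectivity function simply turns the edge test into a Bernoulli draw with probability γ(d_uv).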
Applications HGGs have been suggested as a promising model for social networks, where the hyperbolicity appears through a competition between the similarity and popularity of individuals. References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Resident_Alien_(TV_series)] | [TOKENS: 1080]
Contents Resident Alien (TV series) Resident Alien is an American science fiction comedy-drama television series created by Chris Sheridan, based on the comic book by Peter Hogan and Steve Parkhouse, that aired for four seasons from January 2021 to August 2025 on Syfy. It stars Alan Tudyk in the title role as an extraterrestrial who crash-lands on Earth with the intent to destroy the planet but develops a moral dilemma. In July 2025, it was confirmed that the fourth season would be its last. Premise After crash-landing on Earth in a small Colorado town, an extraterrestrial sent to wipe out humanity kills a vacationing physician and takes on his identity. He is asked to do an autopsy on the town's doctor, who has died in unknown circumstances, and eventually takes over for the doctor at the town's clinic. He wrestles with the moral dilemma of his secret mission, while also dealing with the mayor's young son, who can see his true alien appearance. He develops compassion for humanity and ends up defending them from other extraterrestrial threats. Cast and characters Episodes Production Series creator Chris Sheridan stated that he was inspired to start the Resident Alien television project after reading the novels and comic book series of the same name. Upon being interviewed at a Television Critics Association panel in January 2020, he also stated that his real inspiration came from an aerial phenomenon "close encounter" that he and his wife had witnessed while honeymooning in the Bahamas some twenty years earlier. On May 31, 2018, Syfy announced that the TV adaptation of Resident Alien was given a pilot order with Chris Sheridan as the show creator and Universal Cable Productions, Dark Horse Entertainment, and Amblin Television developing the pilot. On February 28, 2019, Syfy gave a series order with production starting in Vancouver, and David Dobkin directing and serving as an executive producer for the pilot. Robert Duncan McNeill executive produced and was producing director for the remaining episodes. On March 17, 2021, Syfy renewed the series for a second season, which premiered on January 26, 2022, and was split into two eight-episode parts; the second half premiered on August 10, 2022. On July 21, 2022, Syfy renewed the series for a 12-episode third season. On November 15, 2022, Syfy reduced the number of episodes in the third season from 12 to 8. On June 18, 2024, it was announced that the series was renewed for a fourth season and would be simulcast on USA Network; the season premiered on June 6, 2025. On July 24, 2025, Sheridan confirmed that the fourth season would be the final season of the series. On September 20, 2018, Alan Tudyk was cast as the main character "Dr. Harry Vanderspeigle" in the pilot, along with Sara Tomko, Corey Reynolds, Alice Wetterlund, and Levi Fiehler. On January 31, 2020, Linda Hamilton, Mandell Maughan, and Alex Barima were cast in recurring roles in the series. On February 12, 2020, Elizabeth Bowen was cast in the recurring role of Deputy Sheriff Liv Baker. Principal photography for the first season began on September 10, 2020, and concluded on October 14, 2020, in Delta, British Columbia, Canada. Filming for the second season began on August 3, 2021, and concluded on April 1, 2022. Filming for the first half of the second season took place in Ladysmith, British Columbia, with filming for the second half taking place from February 27, 2022. Filming for the third season began on January 30, 2023, and concluded on May 2, 2023. 
Filming for the fourth season began on December 2, 2024 and concluded on March 31, 2025. Release On February 13, 2020, Syfy announced that the series would premiere in summer 2020. However, on October 9, 2020, the premiere was moved to January 2021, specifically January 27 in the United States. Internationally, the series premiered in Canada on CTV Sci-Fi Channel on January 27, 2021, and in the United Kingdom on Sky One the following day. Reception For the first season, review aggregator Rotten Tomatoes reported an approval rating of 94% based on 31 critic reviews, with an average rating of 7.8/10. The website's critics consensus reads, "Resident Alien takes a minute to settle into its skin, but once it does it finds fresh humor in a familiar framework and proves a perfect showcase for Alan Tudyk's singular comedic skills". Metacritic gave the first season a weighted average score of 70 out of 100 based on 15 critic reviews, indicating "generally favorable reviews". For the second season, Rotten Tomatoes reported an approval rating of 100% based on 5 critic reviews, with an average rating of 7.7/10. Notes References External links
========================================
[SOURCE: https://github.com/github-copilot/pro] | [TOKENS: 68]
Try Copilot Pro for 30 days free Everything in Copilot Free and:
========================================
[SOURCE: https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp] | [TOKENS: 858]
Contents Game Oriented Assembly Lisp Game Oriented Assembly Lisp (GOAL, also known as Game Object Assembly Lisp) is a programming language, a dialect of the language Lisp, made for video games and developed by Andy Gavin and the Jak and Daxter team at the company Naughty Dog. It was written using Allegro Common Lisp and used in the development of the entire Jak and Daxter series of games (excluding Daxter and Jak and Daxter: The Lost Frontier). Design GOAL's syntax resembles the Lisp dialect Scheme, though with many idiosyncratic object-oriented programming features such as classes, inheritance, and virtual functions. GOAL encourages an imperative programming style: programs tend to consist of a sequence of events to be executed rather than the functional programming style of functions to be evaluated recursively. This is a departure from Scheme, which allows such side effects but does not encourage imperative style. GOAL does not run in an interpreter; instead it is compiled directly into PlayStation 2 machine code for execution. It offers limited facilities for garbage collection, relying extensively on runtime support. It offers dynamic memory allocation primitives designed to make it well suited to running in constant memory on a video game console. GOAL has extensive support for inlined assembly language code using a special rlet form, allowing programs to freely mix assembly and higher-level constructs within one function. The GOAL compiler is implemented in Allegro Common Lisp. It supports a long-term compiling listener session which gives the compiler knowledge about the state of the compiled, and thus running, program, including the symbol table. This, in addition to dynamic linking, allows a function to be edited, recompiled, uploaded, and inserted into a running game without having to restart. The process is similar to the edit-and-continue feature offered by some C++ compilers, but allows programs to replace arbitrary amounts of code (even up to entire object files), and does not interrupt the running game with the debugger. This feature was used to implement code streaming and level streaming in the Jak and Daxter games. Uses GOAL's first use was for the game Jak and Daxter: The Precursor Legacy. The predecessor language, Game Oriented Object Lisp (GOOL), was also developed by Andy Gavin, for Crash Bandicoot. Since Naughty Dog no longer employs GOAL's primary development and maintenance engineer, and because they were under pressure from their new parent company, Sony, to share technology between studios, Naughty Dog transitioned away from Lisp: In all honesty, the biggest reason we're not using GOAL for next-gen development is because we're now part of Sony. I can only imagine Sony's shock when they purchased Naughty Dog a few years back, hoping to be able to leverage some of our technology across other Sony studios, and then realized that there was no way anyone else would be able to use any of our codebase. Sony wants us to be able to share code with other studios, and this works both ways - both other studios using our code and vice versa. Add this to the difficulty curve of learning a new language for new hires, lack of support from external development tools (we had our own compiler, linker, and debugger, and pretty much had to use Emacs as our IDE), etc, means that there are clearly a lot of other factors involved. Note, however, that these issues aren't really technical problems, they're social ones. 
— Scott Shumaker However, they have since resumed using it for scripting on some PlayStation 3 games, including The Last of Us. OpenGOAL A community project, OpenGOAL, started in 2020 with the goal of porting GOAL to x86-64 by decompiling existing Jak and Daxter: The Precursor Legacy, Jak II, Jak 3 and, tentatively, Jak X: Combat Racing assets and recompiling them natively. It includes a GOAL compiler written in C++ as well as a read–eval–print loop to enable a similar workflow to Naughty Dog's original implementation. By November 2023, the OpenGOAL team had produced Windows and Linux ports for the first two games that are 100% completable, with a Jak 3 port in development as of 2026[update]. References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Palo_Alto,_California] | [TOKENS: 10187]
Contents Palo Alto, California Palo Alto (/ˌpæloʊ ˈæltoʊ/ PAL-oh AL-toh; Spanish for 'tall stick') is a charter city in northwestern Santa Clara County, California, United States, in the San Francisco Bay Area, named after a coastal redwood tree known as El Palo Alto. The city of Palo Alto was established by the American industrialist Leland Stanford and his wife, Jane Stanford, alongside Stanford University, which they founded in memory of their only child, Leland Stanford Jr.; the city was incorporated in 1894. Palo Alto later expanded and now borders East Palo Alto, Mountain View, Los Altos, Los Altos Hills, Stanford, Portola Valley, and Menlo Park. As of the 2020 United States census, the population was 68,572. Palo Alto has one of the highest costs of living in the United States, and its residents are among the most educated in the country. However, it has a youth suicide rate four times higher than the national average, often attributed to academic pressure. As one of the principal cities of Silicon Valley, Palo Alto is home to the headquarters of multiple tech companies, including HP, Space Systems/Loral, VMware, and PARC. Palo Alto has also served as headquarters or the founding location of several other tech companies, including Apple, Google, Meta, Logitech, Tesla, Intuit, IDEO, Pinterest, and PayPal. Ford Motor Company and Lockheed Martin each additionally maintain major research and technology facilities within Palo Alto. History Before the arrival of Europeans, the Ohlone lived on the San Francisco peninsula; in particular, the Puichon Ohlone lived in the Palo Alto area. The area of modern Palo Alto was first recorded by the 1769 party of Gaspar de Portolá, a 64-man, 200-horse expedition setting out from San Diego to find Monterey Bay. The group trekked past the bay without recognizing it and continued north. When they reached modern-day Pacifica, they ascended Sweeney Ridge and saw the San Francisco Bay on November 2. Portolá descended from Sweeney Ridge southeast down San Andreas Creek to Laguna Creek (now Crystal Springs Reservoir), thence to the San Francisquito Creek watershed, ultimately camping from November 6–11, 1769, by a tall redwood later to be known as El Palo Alto. In 1777, Father Junípero Serra established the Mission Santa Clara de Asis, whose northern boundary was San Francisquito Creek and whose lands included modern Palo Alto. The area was under the control of the viceroy of New Spain and ultimately of Spain. On November 29, 1777, Pueblo de San Jose de Guadalupe (now the city of San Jose, a few miles to the south of what was to be Palo Alto) was established by order of the viceroy despite the displeasure of the local mission. The Mexican War of Independence, which ended in 1821, made Mexico an independent country, though San Jose did not recognize the rule of the new Mexican government until May 10, 1825. Mexico proceeded to sell off or grant much of the mission land. During the Mexican–American War, the United States seized Alta California in 1846; however, this was not legalized until the Treaty of Guadalupe Hidalgo was proclaimed on July 4, 1848. Mexican citizens in the area could choose to become United States citizens, and their land grants were to be recognized if they chose to do so (though many legal disputes arose over this).[citation needed] The Rancho Rinconada del Arroyo de San Francisquito land grant, of about 2,230 acres (9.0 km2) on the lower reaches of San Francisquito Creek (i.e., parts of modern Menlo Park and northern Palo Alto), was given to Maria Antonia Mesa in 1841. 
She and her husband Rafael Soto (who had died in 1839) had settled in 1835 near present-day Newell and Middlefield roads and sold supplies. In 1839, their daughter María Luisa Soto (1817–1883) married John Coppinger, who was to be, in 1841, the grantee of Rancho Cañada de Raymundo (in modern San Mateo County). Upon Coppinger's death in 1847, María Luisa inherited the rancho and later married a visiting boat captain, John Greer. Greer owned a home on the site that is now Town & Country Village on Embarcadero and El Camino Real. Greer Avenue and Court are named for him.[citation needed] To the south of the Sotos, the brothers Secundino and Teodoro Robles in 1849 bought Rancho Rincon de San Francisquito from José Peña, the 1841 grantee. The grant covered the area south of Rancho Rinconada del Arroyo de San Francisquito to more or less present-day Mountain View. The grant was bounded on the south by Mariano Castro's Rancho Pastoria de las Borregas grant across San Antonio Road. This later became the Robles Rancho, which constitutes about 80% of Palo Alto and Stanford University today. In 1863, it was whittled down in the courts to 6,981 acres (28.25 km2). Stories say the grand hacienda was built on the site of José Peña's former meagre adobe near Ferne off San Antonio Road, midway between Middlefield and Alma Street. Their hacienda hosted fiestas and bullfights. It was ruined in the 1906 earthquake and its lumber was used to build a large barn nearby, which was said to have lingered until the early 1950s. On April 10, 1853, 250 acres (1.0 km2), comprising the present-day Barron Park, Matadero Creek and Stanford Business Park, were sold for $2,000 to Elisha Oscar Crosby, who called his new property Mayfield Farm. The name of Mayfield was later attached to the community that started nearby. On September 23, 1856, the Crosby land was transferred to Sarah Wallis to satisfy a debt he owed her. In 1880, Secundino Robles, father to twenty-nine children, still lived just south of Palo Alto, near the location of the present-day San Antonio Shopping Center in Mountain View.[citation needed] Many of the Spanish names in the Palo Alto area reflect the local heritage, descriptive terms, and former residents; Pena Court and Miranda Avenue are examples. Miranda Avenue, which ran essentially along today's Foothill Expressway, bears the married name of Juana Briones, and the name recurs in courts, avenues, and other street names in Palo Alto and Mountain View in the quadrant where she owned vast areas between Stanford University, Grant Road in Mountain View, and west of El Camino Real, as well as in a public park. The name Yerba Buena is also to her credit. Rinconada was the major Mexican land grant name.[citation needed] The township of Mayfield was formed in 1855, around the site of a stagecoach stop and saloon known as "Uncle Jim's Cabin" near the intersection of El Camino Real and today's California Avenue in what is now southern Palo Alto. In October 1863 the San Francisco to San Jose railroad had been built as far as Mayfield and service started between San Francisco and Mayfield (the station is now California Avenue); train service all the way to San Jose started in January 1864. El Camino became Main Street; the northeast–southwest cross streets were named for Civil War heroes, with California Avenue originally being Lincoln Street. The town had its own newspaper by 1869 (the Mayfield Enterprise, in English and Spanish), was incorporated in 1903, and had breweries and a cannery. 
In 1875, French financier Jean Baptiste Paulin Caperon, better known as Peter Coutts, purchased land in Mayfield and four other parcels around three sides of today's College Terrace – more than a thousand acres (4.0 km2) extending from today's Page Mill Road to Serra Street and from El Camino Real to the foothills. Coutts named his property Ayrshire Farm. Leland Stanford started buying land in the area in 1876 for a horse farm, called the Palo Alto Stock Farm. Stanford bought Ayrshire Farm in 1882. In 1884, Leland and Jane Stanford lost their only child, Leland Stanford Jr., when he died of typhoid fever at the age of 15, and they decided to create a university in his memory. In 1886, they proposed having the university's gateway be Mayfield. However, they had one condition: alcohol had to be banned from the town. Known for its 13 rowdy saloons, Mayfield rejected their request. This led them to drive the formation of a new temperance town with the help of their friend Timothy Hopkins of the Southern Pacific Railroad, who in 1887 bought 740 acres (3.0 km2) of private land for the new townsite. This Hopkins Tract, bounded by El Camino Real, San Francisquito Creek, Boyce, Channing, Melville, and Hopkins Avenues, and Embarcadero Road, was proclaimed a local Heritage District during Palo Alto's centennial in 1994. The Stanfords set up their university, Stanford University, and a train stop (on University Avenue) by the new town. This new community was initially called University Park (the name "Palo Alto" at that time was attached to what is now College Terrace), but was incorporated on April 16, 1894, with the name Palo Alto. With the Stanfords' support, Palo Alto grew to the size of Mayfield. Mayfield eventually passed an ordinance banning saloons that took effect in January 1905.[citation needed] On July 2, 1925, Palo Alto voters approved the annexation of Mayfield and the two communities were officially consolidated on July 6, 1925. As a result, Palo Alto has two Caltrain stops and two downtown areas: one along University Avenue and one along California Avenue (the latter renamed from Lincoln Street after the annexation, since Palo Alto already had a Lincoln Avenue). The Mayfield News wrote its own obituary four days later: It is with a feeling of deep regret that we see on our streets today those who would sell, or give, our beautiful little city to an outside community. We have watched Mayfield grow from a small hamlet, when Palo Alto was nothing more than a hayfield, to her present size ... and it is with a feeling of sorrow that we contemplate the fact that there are those who would sell or give the city away. Palo Alto continued to annex more land, including the Stanford Shopping Center area in 1953. Stanford Research Park, Embarcadero Road northeast of Bayshore, and the West Bayshore/San Antonio Road area were also annexed during the 1950s. Large amounts of land west of Foothill Expressway were annexed between 1959 and 1968; this land is mostly undeveloped and includes Foothills Park and Arastradero Preserve. The last major annexations were of Barron Park in 1975 and, in 1979, a large area of marshlands bordering the bay. Many of Stanford University's first faculty members settled in the Professorville neighborhood of Palo Alto. Professorville, now a registered national historic district, is bounded by Kingsley, Lincoln, and Addison Avenues and the cross streets of Ramona, Bryant, and Waverley. 
The district includes a large number of well-preserved residences dating from the 1890s, including 833 Kingsley, 345 Lincoln, and 450 Kingsley. 1044 Bryant was the home of Russell Varian, co-inventor of the klystron tube. The Federal Telegraph laboratory site at 218 Channing is a California Historical Landmark recognizing Lee de Forest's development of the vacuum tube amplifier and electronic oscillator there in 1911. While not open to the public, the garage that housed the launch of Hewlett-Packard is located at 367 Addison Avenue; Hewlett-Packard has since restored the house and garage. A second historic district can be found downtown on Ramona Street between University and Hamilton Avenues.

Established in 1963, the Palo Alto Chinese School is the San Francisco Bay Area's oldest Chinese school. Palo Alto is also home to the second-oldest opera company in California, the West Bay Opera. One early major business arrived when Thomas Foon Chew, owner of the Bayside Canning Company in Alviso founded by his father, expanded his business by starting a cannery in 1918 in what was then Mayfield; it initially employed 350 workers and later grew. In the 1920s the Bayside Canning Company became one of the largest in the world. In 1949 the Palo Alto cannery, by then part of the Sutter Packing Company under the ownership of Safeway, closed; at the time it was the largest employer in Palo Alto, with about 1,000 workers. Various businesses have used the building since, including Fry's Electronics.

Palo Alto is also home to a long-standing baseball tradition. The Palo Alto Oaks are a collegiate summer baseball club that has been in the Bay Area since 1950, eight years longer than the San Francisco Giants. The Oaks were managed by Tony Makjavich for their first 49 years. The Oaks were going to fold before the summer 2016 season but were taken on by Daniel Palladino and Whaylan Price, Bay Area baseball coaches who did not want to see the team die. The Oaks have a rich history within the Palo Alto community.

Geography
Palo Alto is situated in the southeastern section of the San Francisco Peninsula. It consists of two large parcels of land connected by a narrow corridor. The southern inland section, located south of Interstate 280, is hilly, rural, and lightly populated; it is the site of the Pearson–Arastradero Preserve and Foothills Park, both part of the Palo Alto park system, as well as large parts of the Los Trancos and Monte Bello Open Space Preserves, which belong to the Midpeninsula Regional Open Space District. The city extends as far as Skyline Boulevard along the ridge of the Santa Cruz Mountains. The northern, more densely populated parcel is bordered by San Francisquito Creek (with Menlo Park and East Palo Alto in adjacent San Mateo County beyond) to the north, San Francisco Bay to the north-east, Mountain View, Los Altos, and Los Altos Hills to the east and south-east, and Stanford University to the south-west and west. Several major transit routes cross this parcel from the north-west to the south-east. The largest and closest to the bay is the Bayshore Freeway; moving inland are Alma Street/Central Expressway, El Camino Real, and Foothill Expressway. Interstate 280 runs parallel to these and crosses the narrow corridor of land that connects the two parcels that make up Palo Alto.
Somewhat perpendicular to these roads are Sand Hill Road (from El Camino Real until it crosses San Francisquito Creek into Menlo Park), Embarcadero Road, Oregon Expressway/Page Mill Road, Arastradero Road/East Charleston Road, and San Antonio Road (the last forms part of the boundary with Mountain View). According to the United States Census Bureau, the city has a total area of 25.8 square miles (67 km2), of which 23.9 square miles (62 km2) is land and 1.9 square miles (4.9 km2), comprising 7.38%, is water. The official elevation is 30 feet (9 m) above sea level, but the city boundaries reach well into the northern section of the Santa Cruz Mountains.

Palo Alto is crossed by several creeks that flow north toward San Francisco Bay: Adobe Creek near its eastern boundary, San Francisquito Creek on its western boundary, and Matadero Creek in between the other two. Arastradero Creek is a tributary of Matadero Creek, and Barron Creek is now diverted to Adobe Creek just south of Highway 101 by a diversion channel. The San Francisquito Creek mainstem is formed by the confluence of Corte Madera Creek and Bear Creek not far below Searsville Dam. Further downstream, Los Trancos Creek is a tributary of San Francisquito Creek below Interstate 280.

Palo Alto has a number of significant natural habitats, including estuarine, riparian, and oak forest. Many of these habitats are visible in Foothills Park, which is owned by the city. The Charleston Slough contains a rich marsh and littoral zone, providing feeding areas for a variety of shorebirds and other estuarine wildlife.

Typical of the South Peninsula region of the San Francisco Bay Area, Palo Alto has a Mediterranean climate with mild, moderately wet winters and warm, dry summers. Typically, in the warmer months, as the sun goes down, the fog bank flows over the foothills to the west and covers the night sky, creating a blanket that helps trap the summer warmth absorbed during the day.[citation needed] Even so, it is rare for the overnight low temperature to exceed 60 °F (16 °C).[citation needed] In January, average daily temperatures range from a low of 39.0 °F (3.9 °C) to a high of 57.8 °F (14.3 °C). In July, average temperatures range from 55.7 to 79.4 °F (13.2 to 26.3 °C). The record high temperature was 108 °F (42 °C) on September 6, 2022, and the record low temperature was 20 °F (−7 °C) on January 11, 1949, and December 24, 1990. Temperatures reach 90 °F (32 °C) or higher on an average of 12.0 days per year and drop to 32 °F (0 °C) or lower on an average of 14.0 days. Due to the rain shadow cast by the Santa Cruz Mountains to the west, average annual rainfall is only 15.12 inches (384 mm). Measurable rainfall occurs on an average of 55.8 days annually. The wettest year on record was 1983 with 32.51 inches (826 mm), and the driest year was 2013 with 3.81 inches (97 mm). The most rainfall in one month was 12.43 inches (316 mm) in February 1998, and the most rainfall in one day was 3.75 inches (95 mm) on February 3, 1998. Measurable snowfall is very rare in the populated areas of Palo Alto, but 1.5 inches (3.8 cm) fell on January 21, 1962. A dusting of snow occasionally occurs in the highest (unpopulated) section of Palo Alto near Skyline Ridge, where the elevation reaches up to 2,812 feet (857 m). According to the Köppen climate classification system, Palo Alto has a warm-summer Mediterranean climate (Csb).

Local government
Palo Alto was incorporated in 1894.
In 1909 a municipal charter created a local government consisting of a fifteen-member city council, with responsibility for various governmental functions delegated to appointed committees. In 1950, the city adopted a council–manager government. Several appointed committees continue to advise the city council on specialized issues, such as land-use planning, utilities, and libraries, but these committees no longer have direct authority over city staff. Currently, the city council has seven members. The mayor and vice mayor serve one year at a time, with terms ending in January. General municipal elections are held in November of even-numbered years, and council terms are four years long. According to one study in 2015, the city's effective property tax rate of 0.42% was the lowest of the California cities included in the study.

Politics
In the California State Legislature, Palo Alto is in the 13th senatorial district, represented by Democrat Josh Becker, and in the 23rd Assembly district, represented by Democrat Marc Berman. In the United States House of Representatives, Palo Alto is in California's 16th congressional district, represented by Democrat Sam Liccardo. According to the California Secretary of State, as of February 10, 2019, Palo Alto had 40,040 registered voters. Of those, 20,857 (52.1%) were registered Democrats, 4,689 (11.7%) were registered Republicans, and 13,520 (33.8%) had declined to state a political party.

Demographics
The 2020 United States census reported that Palo Alto had a population of 68,572. The population density was 2,845.8 inhabitants per square mile (1,098.8/km2). The racial makeup of Palo Alto was 49.9% White, 1.8% African American, 0.2% Native American, 35.5% Asian, 0.2% Pacific Islander, 3.0% from other races, and 9.5% from two or more races. Hispanic or Latino residents of any race were 7.4% of the population. The census reported that 98.8% of the population lived in households, 0.2% lived in non-institutionalized group quarters, and 1.0% were institutionalized. There were 26,677 households, of which 32.7% included children under the age of 18, 55.0% were married-couple households, 4.2% were cohabiting-couple households, 24.5% had a female householder with no partner present, and 16.3% had a male householder with no partner present. 26.6% of households were one person, and 13.2% were one person aged 65 or older. The average household size was 2.54. There were 17,563 families (65.8% of all households). The age distribution was 21.8% under the age of 18, 7.1% aged 18 to 24, 24.8% aged 25 to 44, 26.7% aged 45 to 64, and 19.6% aged 65 or older. The median age was 42.2 years. For every 100 females, there were 96.7 males. There were 28,904 housing units at an average density of 1,199.5 units per square mile (463.1 units/km2), of which 26,677 (92.3%) were occupied. Of these, 52.7% were owner-occupied and 47.3% were occupied by renters. In 2023, the US Census Bureau estimated that the median household income was $220,408 and the per capita income was $121,565. About 3.2% of families and 5.4% of the population were below the poverty line.

The 2010 United States census reported that Palo Alto had a population of 64,403. The population density was 2,497.5 inhabitants per square mile (964.3/km2). The racial makeup of Palo Alto was 41,359 (64.2%) White, 17,461 (27.1%) Asian, 1,197 (1.9%) African American, 121 (0.2%) Native American, 142 (0.2%) Pacific Islander, 1,426 (2.2%) from other races, and 2,697 (4.2%) from two or more races.
Hispanic or Latino residents of any race numbered 3,974 persons (6.2%). The census reported that 63,820 people (99.1% of the population) lived in households, 205 (0.3%) lived in non-institutionalized group quarters, and 378 (0.6%) were institutionalized. There were 26,493 households, of which 8,624 (32.6%) had children under the age of 18 living in them, 13,975 (52.7%) were opposite-sex married couples living together, 1,843 (7.0%) had a female householder with no husband present, and 659 (2.5%) had a male householder with no wife present. There were 979 (3.7%) unmarried opposite-sex partnerships and 188 (0.7%) same-sex married couples or partnerships. 7,982 households (30.1%) were made up of individuals, and 3,285 (12.4%) had someone living alone who was 65 years of age or older. The average household size was 2.41. There were 16,477 families (62.2% of all households); the average family size was 3.04. The population was spread out, with 15,079 people (23.4%) under the age of 18, 3,141 people (4.9%) aged 18 to 24, 17,159 people (26.6%) aged 25 to 44, 18,018 people (28.0%) aged 45 to 64, and 11,006 people (17.1%) aged 65 or older. The median age was 41.9 years. For every 100 females, there were 95.7 males. For every 100 females aged 18 and over, there were 93.0 males. There were 28,216 housing units at an average density of 1,094.2 units per square mile (422.5 units/km2), of which 14,766 (55.7%) were owner-occupied and 11,727 (44.3%) were occupied by renters. The homeowner vacancy rate was 1.5%; the rental vacancy rate was 5.6%. 39,176 people (60.8% of the population) lived in owner-occupied housing units and 24,644 people (38.3%) lived in rental housing units.

Housing
Palo Alto north of Oregon Expressway is filled with older homes, including Craftsman and California Colonials, some of which date back to the 1890s but most of which were built in the first four decades of the 20th century. South of Oregon Expressway, the homes, including many Joseph Eichler-designed or Eichler-style houses, were primarily built in the first 20 years after World War II. While the city contains homes that now cost anywhere from $800,000 to well over $40 million, much of Palo Alto's housing stock is in the style of California mid-century middle-class suburbia. The median home sale price for all of Palo Alto was $1.2 million in 2007 and $1.4 million in July 2009. Palo Alto ranked as the 5th most expensive city in the United States as of 2007[update], with an average home sales price of $1,677,000. In 2010, Palo Alto ranked as the 2nd most expensive city in the United States, with a four-bedroom, two-bathroom home listing for $1.48 million on average. Palo Alto is by some measures the most expensive college town in the United States. By 2020, residents' opposition to new housing had resulted in Palo Alto allowing construction of only enough low-income housing to meet 13% of its state-mandated share, and 6% of its share for very-low-income housing.

In the 1920s, racial covenants banning "persons of African, Japanese, Chinese, or Mongolian descent" from purchasing or renting homes were used in many neighborhoods throughout Palo Alto. In the 1950s, some movements opposed these policies, including the Palo Alto Fair Play Association, as well as architect and developer Joseph Eichler, who built almost 3,000 homes in Palo Alto. Blockbusting strategies were also employed to instill fear in white neighborhoods and cause white flight out of areas on the outskirts of the city.
Blockbusting refers to a practice in which realtors would advertise the incoming presence of a black family in a neighborhood, causing panic among white residents, who would consequently sell their houses very quickly at deflated prices. One famous blockbusting event is responsible for the prevailing demographic divides between Palo Alto and East Palo Alto. One of the most destructive policies of the time was redlining, a practice instituted by the Federal Housing Administration starting in 1937. Under the program, neighborhoods were ranked from Type A, which was considered desirable, to Type D (outlined in red), which was deemed hazardous. Residents of Type D neighborhoods were ineligible for loans to buy or fix houses. The program was implemented so that neighborhoods with any African American population were ranked Type C or D. This was also the case in Palo Alto and the surrounding areas. Palo Alto's white neighborhoods were ranked mostly Type A and B, allowing for wealth accumulation and eventually resulting in the high housing prices seen today. On the other hand, the surrounding areas were all marked Type C and D, and African Americans found themselves driven to the outskirts of Palo Alto, now mostly East Palo Alto, where loan money was unavailable, leading to a state of decay. For the most part, Palo Alto's housing was built on policies that are still reflected in the current demographics.

Economy
Palo Alto serves as a central economic focal point of Silicon Valley and is home to more than 7,000 businesses employing more than 98,000 people. In the mid-1950s Palo Alto had a railroad-village feel, with streets lined with bungalows and prim storefronts. At a time when only 7 percent of American adults had completed four years of college, more than a third of men living in the Palo Alto suburb had a college degree. In 1949 Wallace Sterling was appointed Stanford's president and Frederick Terman became provost. The university was reorganized, offering applied educational programs in physics, materials science, and electrical engineering. Stanford's basic research capabilities were built up, and the laboratory facilities for applied work were brought together in the Stanford Electronics Laboratories. Many prominent technology firms reside in the Stanford Research Park on Page Mill Road, while nearby Sand Hill Road in the adjacent city of Menlo Park is a notable hub of venture capitalists. A number of prominent Silicon Valley companies no longer reside primarily in Palo Alto, including Google (now in Mountain View), Facebook (now in Menlo Park), and PayPal (now in San Jose). In 2021, Tesla, Inc. moved its headquarters from Palo Alto to Austin, Texas. Palo Alto's retail and restaurant trade includes the Stanford Shopping Center, an upscale open-air shopping center established in 1955; downtown Palo Alto, centered on University Avenue; Town and Country Village off El Camino Real; and the California Avenue shopping district in its second downtown. Palo Alto is the location of the first street-level Apple Store, the first Apple mini store, the first West Coast Whole Foods Market store, and the first Victoria's Secret.
According to the city's 2025 Annual Comprehensive Financial Report, the top employers in the city are:

Utilities
Palo Alto has a city-owned and -operated utility, City of Palo Alto Utilities (CPAU), which provides water, electric, gas, and wastewater service within city limits, with the minor exception of a rural portion of the city in the hills west of Interstate 280, past the Country Club, which does not receive gas from the city. Almost all other communities in northern California depend on Pacific Gas and Electric Company (PG&E) for gas and electricity. Water and Gas Services (WGS) operates the gas and water distribution networks within the city limits; the city operates both the gas meters and the distribution pipelines. Water comes from city-operated watersheds and wells and from the City and County of San Francisco's Hetch Hetchy system. The city is located in the Santa Clara Valley Water District, North Zone. Hetch Hetchy pipelines #3 and #4 pass through the city. The city operates its own electric power distribution network and telemetry cable network. Interconnection points tie the city into PG&E's electric transmission system, which brings power from several sources to the city. Palo Alto is a member of a joint powers authority (the Northern California Power Agency), which cooperatively generates electricity for government power providers such as the City of Santa Clara, the City of Redding, and the Port of Oakland. Roughly the same group of entities operates the Transmission Agency of Northern California (TANC), which transports power over its own lines from as far away as British Columbia through an interconnection with the federal Bonneville Power Administration. A local oddity is a series of joint utility poles whose primary-conductor crossarms are marked PGE and CPA (City of Palo Alto) to identify each utility's side of the shared crossarms.

Palo Alto has an ongoing community debate about the city providing fiber-optic connectivity to all residences.[citation needed] A series of pilot programs have been proposed; one proposal called for the city to install dark fiber, which would be made live by a contractor.[citation needed] Services traditionally attributed to a cable television provider were sold to a regulated commercial concern. Previously the cable system was operated by a cooperative called the Palo Alto Cable Co-op.[citation needed] The former Regional Bell Operating Company in Palo Alto was Pacific Telephone, now called AT&T Inc., and previously called SBC and Pacific Bell. One of the earliest central office facilities switching Palo Alto calls is the historic Davenport central office (CO) at 529 Bryant Street.[citation needed] The building was sold and is now the home of the Palo Alto Internet Exchange. The former CO building is marked by a bronze plaque and is located on the north side of Bryant Street between University Avenue and Hamilton Avenue. It was called Davenport after the exchange name in use at the introduction of dial telephone service in Palo Alto.[citation needed] For example, modern numbers starting with 325- were Davenport 5 in the 1950s and '60s, the exchange-name letters mapping to the first two dialed digits (see the sketch after this paragraph). The Step-by-Step office was scrapped and replaced by stored-program-controlled equipment at a different location around 1980. Stanford calls ran on a step-by-step Western Electric 701 PBX until the university purchased its own switch around 1980; it had the older, traditional Bell System 600 Hz + 120 Hz dial tone. The old 497-number PBX, MDF, and battery string were housed in a steel building at 333 Bonair Siding.
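The mapping from exchange names to digits follows the standard North American telephone dial, on which each letter shares a key with two or three others. The short sketch below is not from the source; the function name and layout are illustrative only. It shows how the first two letters of "Davenport" plus the numeral 5 yield the 325- prefix mentioned above.

```python
# Minimal sketch of the traditional North American dial lettering (illustrative only).
DIAL = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PRS", "8": "TUV", "9": "WXY",  # Q and Z were omitted on old dials
}
LETTER_TO_DIGIT = {letter: digit for digit, letters in DIAL.items() for letter in letters}

def exchange_prefix(name: str, numeral: str) -> str:
    """Return the three-digit prefix for an exchange name plus its numeral,
    e.g. ("Davenport", "5") -> "325"."""
    first_two = name.upper()[:2]
    return "".join(LETTER_TO_DIGIT[ch] for ch in first_two) + numeral

if __name__ == "__main__":
    print(exchange_prefix("Davenport", "5"))  # prints "325"
```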
From the 1950s to the 1980s, the bulk of Palo Alto calls were switched on Number 5 Crossbar systems. By the mid-1980s, these electromechanical systems had been junked. Under the Bell System's regulated monopoly, local coin telephone calls were ten cents until the early 1980s.[citation needed]

During the drought of the early 1990s, Palo Alto employed water-waste patrol officers to enforce water-saving regulations.[citation needed] The team, called "Gush Busters", patrolled city streets looking for broken water pipes and poorly managed irrigation systems. Regulations were set to stop restaurants from habitually serving water, to curb runoff from irrigation, and to prevent irrigation during the day. The main goal of the team was to educate the public in ways to save water. Citations consisted of Friendly Reminder postcards and more formal notices. To help promote the conservation message, the team only used bicycles and mopeds.[citation needed]

Fire and police departments
The city was among the first in Santa Clara County to offer advanced life support (ALS) paramedic-level (EMT-P) ambulance service. In an arrangement predating countywide paramedic service, Palo Alto Fire operates two paramedic ambulances, which are theoretically shared with county EMS assets. The Palo Alto Fire Department is currently the only fire department in Santa Clara County that routinely transports patients; Rural Metro holds the Santa Clara County 911 contract and provides transportation in other cities. Enhanced 9-1-1 arrived in about 1980 and included the then-new ability to report emergencies from coin telephones without using a coin. Palo Alto Fire also has a contract with Stanford University to cover most of the campus. In all, the Fire Department has six regular stations plus one opened only during the summer fire season in the foothills. The police station was originally housed in a stone building at 450 Bryant Street; still engraved with the words Police Court, the building is now a non-profit senior citizen center, Avenidas. The police are now headquartered in the City Hall high rise. The department has just under 100 sworn officers, supplemented by approximately ten reserve officers and professional staff who support the police department and the animal services organization. The Barron Park donkey pasture, in Bol Park, has one of the original firetrucks used by the volunteer Barron Park fire department (before it was incorporated into Palo Alto).

Education
The Palo Alto Unified School District (PAUSD) provides public education for most of Palo Alto. According to the National Center for Education Statistics, Palo Alto has a student-teacher ratio of 14.9, much lower than in some surrounding communities; Juana Briones Elementary has a student-teacher ratio of 14.4. The school board meets at 7 p.m. on the second and fourth Tuesdays of the month; the meetings are open to the public and are broadcast live on Channel 28, a government-access television (GATV) cable channel. Channel 28 is operated by the Midpeninsula Community Media Center in Palo Alto, which is affiliated with the Alliance for Community Media (ACM); ACM represents over 2,000 public, educational, and government access (PEG) channels in the US. PAUSD high school students attend either Gunn High School or Palo Alto High School. There are three middle schools in the school district: JLS, Greene, and Fletcher. Fletcher students typically go to Gunn, while Greene students go to Palo Alto High School; JLS students are split between the two. PAUSD also owns the property of the Cubberley Community Center, which was formerly the Ellwood P.
Cubberley High School. The Los Altos School District and the Mountain View–Los Altos Union High School District provide public education for the Monroe neighborhood portion of Palo Alto, off El Camino Real south of Adobe Creek. Palo Alto is home to Palo Alto University, a school focused on psychology. The main academic campus of Stanford University, a private research university, is adjacent to Palo Alto, and the university also holds lands within the Palo Alto city limits.

Libraries
The Palo Alto City Library has five branches, with a total of 265,000 items in their collections. The Mitchell Park Library was rebuilt between 2010 and December 2014 to become the largest in Palo Alto, and the former Main Library was then renamed the Rinconada branch. The Palo Alto Children's Library is located close to the former Main Library. There are smaller branches in the Downtown and College Terrace neighborhoods.

Media
The Palo Alto Daily Post publishes six days a week. The Palo Alto Daily News, a unit of the San Jose Mercury News, publishes five days a week. The Palo Alto Weekly is published on Fridays. The Palo Alto Times, a daily newspaper, served Palo Alto and neighboring cities beginning in 1894; in 1979 it became the Peninsula Times Tribune, which ceased publication in 1993. KDOW, 1220 AM, began broadcasting in 1949 as KIBE. It later became KDFC, simulcasting classical KDFC-FM; as KDOW it broadcasts a business news format. The transmitter is in East Palo Alto near the western approach to the Dumbarton Bridge; power is 5,000 watts daytime and 145 watts nighttime. KZSU at 90.1 FM is owned by Stanford University. KFJC at 89.7 FM is licensed to the Foothill-De Anza Community College District in nearby Los Altos. KTLN-TV, virtual channel 68, transmits from Mt. Allison across San Francisco Bay, east of Palo Alto. The Midpeninsula Community Media Center provides public, educational, and government access (PEG) cable television channels 26, 28, 29, 30 and 75.

Transportation
Palo Alto is served by two major freeways, Highway 101 and Interstate 280, and is traversed by the Peninsula's main north–south boulevard, El Camino Real (SR 82). Santa Clara County maintains two expressways in Palo Alto: Route G3 is the city's main east–west route and is the only road in the city that connects the two freeways directly, while Route G6 travels through the city as Alma Street, serving as an alternate route to SR 82. The city is also served indirectly by State Route 84, which traverses the Dumbarton Bridge to the north, and State Route 85 via Mountain View to the south. There are no parking meters in Palo Alto, and all municipal parking lots and multi-level parking structures are free but limited to two or three hours on weekdays between 8 a.m. and 5 p.m. Downtown Palo Alto has recently added many new lots to absorb the overflow of vehicles, and beginning in 2014 the city started implementing permit parking in some areas of the town. Palo Alto is served by Palo Alto Airport (KPAO), one of the busiest single-runway general aviation airports in the country. It is used by many daily commuters who fly (usually in private single-engine aircraft) from their homes in the Central Valley to work in the Palo Alto area. The nearest commercial airport is San Jose International Airport (SJC), also known as Norman Mineta Airport, about 15 miles (24 km) southeast. San Francisco International Airport (SFO) is about 21 miles (34 km) north.
Passenger train service is provided exclusively by Caltrain, which runs between San Francisco and San Jose, extending to Gilroy. Caltrain has two regular stations in Palo Alto. The main one, Palo Alto Station in downtown Palo Alto, is served by local, limited, and express trains and is the second-busiest station on the entire Caltrain line (behind 4th and King in San Francisco). The other station, at California Avenue, is served by local and limited trains. A third, the Stanford station, located beside Alma Street at Embarcadero Road, is used for occasional sports events (generally football) at Stanford Stadium. Freight trains through Palo Alto are operated by Union Pacific (formerly Southern Pacific). There are four grade crossings within city limits, at Alma Street, Churchill Avenue, Meadow Drive, and Charleston Road. The city has made plans to upgrade the southern crossings and has even proposed closing the Churchill Avenue crossing to road traffic entirely, but the project has been a hot-button issue for local residents and politicians for over a decade. The current proposed solution is elevating the tracks above grade while lowering the roadway below grade. Putting the tracks in a tunnel underneath the right of way and converting the old track bed into a park was also proposed, but the idea was eventually dropped after it was deemed too costly and raised concerns over environmental impacts. Despite increased train service and Caltrain having already installed electrification equipment on the mainline, Palo Alto has no approved plans to address the crossings, except for immediate changes at Churchill Avenue to address safety concerns at that intersection.

The Palo Alto Transit Center, adjacent to the Palo Alto train station, is the major bus hub for northern Santa Clara County. The Santa Clara Valley Transportation Authority (VTA) provides primary bus service through Palo Alto, with service to the South Bay and Silicon Valley. The San Mateo County Transit District (SamTrans) provides service in San Mateo County to the north, but some of its lines serve the Palo Alto Transit Center. The free Stanford University shuttle (Marguerite) provides supplementary bus service between Stanford University and the Palo Alto Transit Center, and the Palo Alto Free Shuttle (Crosstown and Embarcadero routes) circulates frequently and serves major points in Palo Alto, including the main library, downtown, the Municipal Golf Course, the Palo Alto Transit Center, and both high schools. The Dumbarton Express is a weekday-only, limited-stop bus service that connects Union City BART in the East Bay to Palo Alto via the Dumbarton Bridge, serving Stanford University, Stanford Research Park, the Palo Alto Transit Center, and the Veterans Hospital.

Cycling is a popular mode of transportation in Palo Alto. 9.5% of residents bicycle to work, the highest percentage of any city in the Bay Area and the third highest in the United States, after Davis, California, and Boulder, Colorado. Since 2003, Palo Alto has received a Bicycle Friendly Community rating of "Gold" from the League of American Bicyclists. The city's flat terrain and many quiet, tree-shaded residential streets offer comfort and safety to cyclists, and the temperate climate makes year-round cycling convenient.
Palo Alto pioneered the bicycle boulevard concept in the early 1980s, enhancing residential Bryant Street to prioritize it for cyclists by removing stop signs, providing special traffic signals, and installing traffic diverters and a bicycle/pedestrian bridge over Matadero Creek.[citation needed] However, busy arterial streets, which often offer the fastest and most direct route to many destinations, are dangerous for cyclists due to high volumes of fast-moving traffic and the lack of bicycle lanes.[citation needed] El Camino Real, Alma Street, and Embarcadero and Middlefield roads, all identified as "high priorities" for adding bicycle lanes to improve safety by the 2003 Palo Alto Bicycle Transportation Plan, still contain no provisions for cyclists. The Palo Alto Police Department decided to stop using tasers to detain bicyclists after a 2012 incident in which a 16-year-old boy, who had bicycled through a stop sign, was injured after police officers pursued him, fired a taser at him, and suddenly braked their patrol car in front of him, causing the boy to crash.

Conditions for walking are excellent in Palo Alto except for crossing high-volume arterial streets such as El Camino Real and Oregon Expressway. Sidewalks are available on nearly every city street, with the notable exception of the Barron Park neighborhood, which was the last to be incorporated into the city. Palo Alto's street grid is well connected, with few dead-end streets, especially in the city's older northern neighborhoods. An extensive urban forest, which is protected by the city's municipal code, provides shade and visual diversity and slows motor vehicle traffic. 4.8% of residents walk to work.

The city of Palo Alto created a rideshare service called Palo Alto Link, which made its debut in 2023. The service operates throughout the city and offers discounted fares for students and senior citizens. Funding for the project was provided by the 2016 VTA Measure B and the Bay Area Air Quality Transportation Fund for Clean Air. The program is administered by Nomad Transit and has 10 vehicles in its fleet.

Sister cities
Palo Alto has eight sister cities, as designated by Sister Cities International: The city also has a relationship via Sibling Cities USA with Bloomington, Indiana, since 2022. In addition, Narok in Kenya is a friendship city, a less formal, shorter-term relationship that started in 2025.

In 1989, Palo Alto received a gift from Linköping of a large, whimsical wooden sculpture called Foreign Friends (Fjärran Vänner), depicting a man, woman, dog and bird sitting on a park bench. The sculpture was praised by some, called "grotesque" by others, and became a lightning rod for vandals. It was covered with a large, addressed postcard marked "Return to Sender." A former Stanford University professor was arrested for attempting to light it on fire, and it was also doused with paint. When the original heads were decapitated on Halloween 1993, the statue became a shrine: flower bouquets and cards were placed upon it. Following an anonymous donation, the heads were restored. Within weeks, the restored heads were decapitated again, this time disappearing. The heads were eventually replaced with new ones, which generated even more distaste, as many deemed the new heads even less attractive. A few months later, the man's arm was chopped off, the woman's lap was vandalized, the bird was stolen, and the replacement heads were decapitated and stolen.
The sculpture was removed from its location on Embarcadero Road and Waverley Avenue in 1995, dismantled, and placed in storage until it was destroyed in 2000. Ironically, the statue was designed not as a lasting work of art, but as something to be climbed on, with a lifespan of 10 to 25 years.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Jewish_diaspora] | [TOKENS: 14550]
Jewish diaspora
The Jewish diaspora (Hebrew: גוֹלָה gōlā), alternatively the dispersion (תְּפוּצָה təfūṣā) or the exile (גָּלוּת gālūṯ; Yiddish: גלות gōləs),[a] consists of Jews who reside outside of the Land of Israel. Historically, it refers to the expansive scattering of the Israelites out of their homeland in the Southern Levant and their subsequent settlement in other parts of the world, which gave rise to the various Jewish communities. In the Hebrew Bible, the term gālūṯ (lit. 'exile') denotes the fate of the Twelve Tribes of Israel over the course of two major exilic events in ancient Israel and Judah: the Assyrian captivity, which occurred after the Kingdom of Israel was conquered by the Neo-Assyrian Empire in the 8th century BCE; and the Babylonian captivity, which occurred after the Kingdom of Judah was conquered by the Neo-Babylonian Empire in the 6th century BCE. While those who were taken from Israel dispersed as the Ten Lost Tribes, those who were taken from Judah—consisting of the Tribe of Judah and the Tribe of Benjamin—became known by the identity "Jew" (יְהוּדִי Yehūdī, lit. 'of Judah') and were repatriated following the Persian conquest of Babylonia.

A Jewish diaspora population existed for many centuries before the Roman siege of Jerusalem in 70 CE. In the preceding Second Temple period, it existed as a consequence of various factors, including the creation of political and war refugees, enslavement, deportation, overpopulation, indebtedness, military employment, and opportunities in business, commerce, and agriculture. Prior to the mid-1st century CE, in addition to Judea, Syria, and Babylonia, large Jewish communities existed in the Roman provinces of Egypt, Crete and Cyrenaica, and in Rome itself. In 6 CE, most of the Southern Levant was organized as the Roman province of Judaea, where a large uprising led to the First Jewish–Roman War, which destroyed the Second Temple and most of Jerusalem. The Jewish defeat by the Roman army and the accompanying elimination of the symbolic centre of Jewish identity (the Temple in Jerusalem) marked the end of Second Temple Judaism, motivating many Jews to formulate a new self-definition and adjust their existence to the prospect of an indefinite period of displacement. Nevertheless, intermittent warfare between Jewish nationalists and the Roman Empire continued for several decades. In 129/130 CE, the Roman emperor Hadrian ordered the construction of Aelia Capitolina over the ruins of Jerusalem, sparking the Bar Kokhba revolt in 132 CE. Led by Simon bar Kokhba, this uprising endured for four years but was ultimately unsuccessful and became the last of the Jewish–Roman wars; Jews were massacred or displaced across the province, banned from Jerusalem and its surrounding areas, and forbidden to practice Judaism, leading to a significant rise in the Jewish diaspora.

By the Middle Ages, owing to increasing migration and resettlement, diaspora Jews had divided into distinct regional groups that are generally addressed according to two primary geographical groupings: the Ashkenazi Jews, who coalesced in the Holy Roman Empire and Eastern Europe; and the Sephardic Jews, who coalesced in the Iberian Peninsula and the Arab world. These groups have parallel histories, sharing many cultural similarities and experiences of persecution, expulsions, and exoduses, such as the expulsion from England in 1290, the expulsion from Spain in 1492, and the expulsion from the Muslim world after 1948.
Although the two branches comprise many unique ethno-cultural practices and have links to their local host populations (such as Central Europeans for Ashkenazi Jews, and Hispanics and Arabs for Sephardic Jews), their common religious practices and shared ancestry, as well as their continuous communication and population transfers, have been responsible for cementing a unified sense of peoplehood between them since the late Roman period.

Origins and uses of the terms
Diaspora has been a common phenomenon for many peoples since antiquity, but what is particular about the Jewish instance is the pronounced negative, religious, indeed metaphysical connotations traditionally attached to dispersion and exile (galut), two conditions which were conflated. The English term diaspora, which entered usage as late as 1876, and the Hebrew word galut, though covering a similar semantic range, bear some distinct differences in connotation. The former has no traditional equivalent in Hebrew usage. Steven Bowman argues that diaspora in antiquity connoted emigration from an ancestral mother city, with the emigrant community maintaining its cultural ties with the place of origin. Just as the Greek city exported its surplus population, so did Jerusalem, while remaining the cultural and religious centre or metropolis (ir-va-em be-yisrael) for the outlying communities. It could have two senses in Biblical terms: the idea of becoming a 'guiding light unto the nations' by dwelling in the midst of gentiles, or of enduring the pain of exile from one's homeland. The conditions of diaspora in the former case were premised on the free exercise of citizenship or resident alien status. Galut implies by comparison living as a denigrated minority, stripped of such rights, in the host society. Sometimes diaspora and galut are defined as 'voluntary' as opposed to 'involuntary' exile. Diaspora, it has been argued, has a political edge, referring to geopolitical dispersion, which may be involuntary but which can assume, under different conditions, a positive nuance. Galut is more teleological and connotes a sense of uprootedness. Daniel Boyarin defines diaspora as a state in which people have a dual cultural allegiance, productive of a double consciousness, and in this sense a cultural condition not premised on any particular history, as opposed to galut, which is more descriptive of an existential situation, properly that of exile, conveying a particular psychological outlook.

The Greek word διασπορά (dispersion) first appears as a neologism in the translation of the Old Testament known as the Septuagint, where it occurs 14 times, starting with a passage reading: ἔση διασπορὰ ἐν πάσαις βασιλείαις τῆς γῆς ('thou shalt be a diaspora (or dispersion) in all kingdoms of the earth', Deuteronomy 28:25), translating 'ləza'ăwāh', whose root suggests 'trouble, terror'. In these contexts it never translated any term in the original Tanakh drawn from the Hebrew root glt (גלה), which lies behind galah and golah, nor even galuth. Golah appears 42 times and galuth in 15 passages, first occurring in 2 Kings 17:23's reference to the deportation of the Judean elite to Babylonia. Stéphane Dufoix, in surveying the textual evidence, draws the following conclusion: galuth and diaspora are drawn from two completely different lexicons.
The first refers to episodes, precise and datable, in the history of the people of Israel, when the latter was subjected to a foreign occupation, such as that of Babylon, in which most of the occurrences are found. The second, perhaps with a single exception that remains debatable, is never used to speak of the past and does not concern Babylon; the instrument of dispersion is never the historical sovereign of another country. Diaspora is the word for chastisement, but the dispersion in question has not occurred yet: it is potential, conditional on the Jews not respecting the law of God. . . It follows that diaspora belongs, not to the domain of history, but of theology.'

In Talmudic and post-Talmudic Rabbinic literature, this phenomenon was referred to as galut (exile), a term with strongly negative connotations, often contrasted with geula (redemption). Eugene Borowitz describes galut as "fundamentally a theological category". The modern Hebrew concept of Tefutzot (תפוצות, "scattered") was introduced in the 1930s by the Jewish-American Zionist academic Simon Rawidowicz, who to some degree argued for the acceptance of the Jewish presence outside the Land of Israel as a modern reality and an inevitability. The Greek term for diaspora (διασπορά) also appears three times in the New Testament, where it refers to the scattering of Israel, i.e., the Ten Northern Tribes of Israel as opposed to the Southern Kingdom of Judah, although James (1:1) refers to the scattering of all twelve tribes.

In modern times, the contrasting meanings of diaspora/galut have given rise to controversy among Jews. Bowman states this in the following terms: (Diaspora) follows the Greek usage and is considered a positive phenomenon that continues the prophetic call of Israel to be a 'light unto the nations' and establish homes and families among the gentiles. The prophet Jeremiah issues this call to the preexilic emigrants in Egypt. . . Galut is a religious–nationalist term, which implies exile from the homeland as a result of collective sins, an exile that will be redeemed at YHWH's pleasure. Jewish messianism is closely connected with the concept of galut.'

In Zionist debates a distinction was made between galut and golus/gola. The latter denoted social and political exile, whereas the former, while consequential on the latter, was a psycho-spiritual framework that was not wholly dependent on the conditions of life in diasporic exile, since one could technically remain in galut even in Eretz Israel. Whereas Theodor Herzl and his followers thought that the establishment of a Jewish state would put an end to the diasporic exile, Ahad Ha-am thought, to the contrary, that such a state's function would be to 'sustain Jewish nationhood' in the diaspora.

Pre-Roman diaspora
In 722 BCE, the Assyrians, under Sargon II, successor to Shalmaneser V, conquered the Kingdom of Israel, and many Israelites were deported to Mesopotamia. The Jewish diaspora proper began with the Babylonian exile in the 6th century BCE. After the overthrow of the Kingdom of Judah in 586 BCE by Nebuchadnezzar II of Babylon (see Babylonian captivity) and the deportation of a considerable portion of its inhabitants to Mesopotamia, the Jews had two principal cultural centers: Babylonia and the Land of Israel. Deportees returned to Samaria after the Neo-Babylonian Empire was in turn conquered by Cyrus the Great.
The biblical book of Ezra includes two texts said to be decrees allowing the deported Jews to return to their homeland after decades and ordering the Temple rebuilt. The differences in content and tone of the two decrees, one in Hebrew and one in Aramaic, have caused some scholars to question their authenticity. The Cyrus Cylinder, an ancient tablet on which is written a declaration in the name of Cyrus referring to the restoration of temples and the repatriation of exiled peoples, has often been taken as corroboration of the authenticity of the biblical decrees attributed to Cyrus, but other scholars point out that the cylinder's text is specific to Babylon and Mesopotamia and makes no mention of Judah or Jerusalem. Lester L. Grabbe asserted that the "alleged decree of Cyrus" regarding Judah "cannot be considered authentic", but that there was a "general policy of allowing deportees to return and to re-establish cult sites". He also stated that archaeology suggests that the return was a "trickle" taking place over decades, rather than a single event. There is no sudden expansion of the population base of 30,000 and no credible indication of any special interest in Yehud.

Although most of the Jewish people during this period, especially the wealthy families, were to be found in Babylonia, the existence they led there, under the successive rulers of the Achaemenids, the Seleucids, the Parthians, and the Sassanians, was obscure and devoid of political influence. The poorest but most fervent of the exiles returned to Judah / the Land of Israel during the rule of the Achaemenids (c. 550–330 BCE). There, with the reconstructed Temple in Jerusalem as their center, they organized themselves into a community, animated by a remarkable religious ardor and a tenacious attachment to the Torah as the focus of their identity. As this little nucleus increased in numbers with the accession of recruits from various quarters, it awoke to a consciousness of itself and strove once again for national independence and political enfranchisement and sovereignty.[citation needed]

The first Jewish diaspora in Egypt arose in the last century of pharaonic rule, apparently with the settlement there, either under Ashurbanipal or during the reign of Psammeticus, of a colony of Jewish mercenaries, a military class that successively served the Persian, Ptolemaic, and Roman governments down to the early decades of the second century CE, when the revolt against Trajan destroyed them. Their presence was buttressed by numerous Jewish administrators who joined them in Egypt's military and urban centres. According to Josephus, when Ptolemy I took Judea, he led 120,000 Jewish captives to Egypt, and many other Jews, attracted by Ptolemy's liberal and tolerant policies and Egypt's fertile soil, emigrated from Judea to Egypt of their own free will. Ptolemy settled the Jews in Egypt to employ them as mercenaries. Philadelphus subsequently emancipated the Jews taken to Egypt as captives and settled them in cleruchies, or specialized colonies, as Jewish military units.[better source needed] Jews began settling in Cyrenaica (modern-day eastern Libya) around the third century BCE, during the rule of Ptolemy I of Egypt, who sent them to secure the region for his kingdom. By the early first century BCE, the geographer Strabo identified Jews as one of the four main groups residing in the city of Cyrene.
While communities in Alexandria and Rome dated back to before the Maccabean Revolt, the population of the Jewish diaspora expanded after Pompey's campaign in 62 BCE. Under the Hasmonean princes, who were at first high priests and then kings, the Jewish state displayed even a certain luster[clarification needed] and annexed several territories. Soon, however, discord within the royal family and the growing disaffection of the pious towards rulers who no longer evinced any appreciation of the real aspirations of their subjects made the Jewish nation easy prey for the ambitions of the now increasingly autocratic and imperial Romans, the successors of the Seleucids. In 63 BCE Pompey invaded Jerusalem, the Jewish people lost their political sovereignty and independence, and Gabinius subjected them to tribute.[citation needed]

As early as the third century BCE, Jewish communities sprang up in the Aegean islands, Greece, Asia Minor, Cyrenaica, Italy and Egypt.: 8–11 In Palestine, under the favourable auspices of the long period of peace—almost a whole century—which followed the advent of the Ptolemies, the new ways were to flourish. By means of all kinds of contacts, and particularly thanks to the development of commerce, Hellenism infiltrated on all sides in varying degrees. The ports of the Mediterranean coast were indispensable to commerce and, from the very beginning of the Hellenistic period, underwent great development. In the Western diaspora, Greek quickly became dominant in Jewish life, and little sign remains of profound contact with Hebrew or Aramaic, the latter probably having been the more prevalent. Jews migrated to new Greek settlements that arose in the Eastern Mediterranean and the former subject areas of the Persian Empire on the heels of Alexander the Great's conquests, spurred on by the opportunities they expected to find. The proportion of Jews in the diaspora relative to the size of the nation as a whole increased steadily throughout the Hellenistic era and reached astonishing dimensions in the early Roman period, particularly in Alexandria. It was not least for this reason that the Jewish people became a major political factor, especially since the Jews in the diaspora, notwithstanding strong cultural, social and religious tensions, remained firmly united with their homeland. Smallwood writes that "It is reasonable to conjecture that many, such as the settlement in Puteoli attested in 4 BCE, went back to the late (pre-Roman Empire) Roman Republic or early Empire and originated in voluntary emigration and the lure of trade and commerce." Many Jews migrated to Rome from Alexandria due to flourishing trade relations between the cities. Dating the numerous settlements is difficult. Some settlements may have resulted from Jewish emigration following the defeat of Jewish revolts. Others, such as the Jewish community in Rome, were far older, dating back to at least the mid-second century BCE, although it expanded greatly following Pompey's campaign in 62 BCE. In 6 CE the Romans annexed Judaea. Only the Jews in Babylonia remained outside of Roman rule.: 168 Unlike the Greek-speaking Hellenized Jews in the west, the Jewish communities in Babylonia and Judea continued to use Aramaic as a primary language. As early as the middle of the 2nd century BCE, the Jewish author of the third book of the Oracula Sibyllina addressed the "chosen people," saying: "Every land is full of thee and every sea."
The most diverse witnesses, such as Strabo, Philo, Seneca, Luke (the author of the Acts of the Apostles), Cicero, and Josephus, all mention Jewish populations in the cities of the Mediterranean basin. See also History of the Jews in India and History of the Jews in China for pre-Roman (and post-Roman) diasporic populations. King Agrippa I, in a letter to Caligula, enumerated among the provinces of the Jewish diaspora almost all the Hellenized and non-Hellenized countries of the Orient. This enumeration was far from complete, as Italy and Cyrene were not included. The epigraphic discoveries from year to year augment the number of known Jewish communities but must be viewed with caution due to the lack of precise evidence of their numbers. According to the ancient Jewish historian Josephus, the next most dense Jewish population after the Land of Israel and Babylonia was in Syria, particularly in Antioch and then in Damascus, where 10,000 to 18,000 Jews were massacred during the great insurrection. The ancient Jewish philosopher Philo gives the number of Jewish inhabitants in Egypt as one million, one-eighth of the population. Alexandria was by far the most important of the Egyptian Jewish communities. The Jews in the Egyptian diaspora were on a par with their Ptolemaic counterparts, and close ties existed for them with Jerusalem. As in other Hellenistic diasporas, the Egyptian diaspora was one of choice, not of imposition. To judge by the later accounts of wholesale massacres in 115 CE, the number of Jewish residents in Cyrenaica, Cyprus, and Mesopotamia must also have been large. At the commencement of the reign of Caesar Augustus, there were over 7,000 Jews in Rome (though this is only the number said to have escorted the envoys who came to demand the deposition of Archelaus; compare Bringmann, Klaus: Geschichte der Juden im Altertum, Stuttgart 2005, p. 202, where Bringmann speaks of 8,000 Jews living in the city of Rome). Many sources say that the Jews constituted a full one-tenth (10%) of the population of the ancient city of Rome itself. Finally, if the sums confiscated by the governor Lucius Valerius Flaccus in the year 62/61 BCE represented the tax of a didrachma per head for a single year, it would imply that the Jewish population of Asia Minor numbered 45,000 adult males, for a total of at least 180,000 persons (a worked sketch of this arithmetic appears below).[citation needed]

Under the Roman Empire
The 13th-century author Bar Hebraeus gave a figure of 6,944,000 Jews in the Roman world. Salo Wittmayer Baron considered the figure convincing. The figure of seven million within and one million outside the Roman world in the mid-first century became widely accepted, including by Louis Feldman. However, contemporary scholars now accept that Bar Hebraeus based his figure on a census of total Roman citizens and that it thus included non-Jews; the figure of 6,944,000 is recorded in Eusebius' Chronicon.: 90, 94, 104–05 Louis Feldman, previously an active supporter of the figure, now states that he and Baron were mistaken.: 185 Philo gives a figure of one million Jews living in Egypt. John R. Bartlett rejects Baron's figures entirely, arguing that we have no clue as to the size of the Jewish demographic in the ancient world.: 97–103 The Romans did not distinguish between Jews inside and outside of the Land of Israel/Judaea. They collected an annual temple tax from Jews both in and outside of Israel. The suppression of the diaspora uprisings of 116–117 CE resulted in the near-total destruction of Jewish communities in Cyrenaica and Egypt.
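Returning to the didrachma-based estimate above: the annual temple tax was a didrachma (two drachmas) per adult male, so one year's confiscated tax can be converted into a head count, and a household multiplier (implied by the passage's own figures, 180,000 / 45,000 = 4) converts taxed adult males into total persons. The sketch below is illustrative only; the confiscated sum shown is back-calculated from the passage's 45,000 figure rather than attested in the text.

```python
# Back-of-the-envelope sketch of the estimate described above (illustrative only).
# Assumptions: the temple tax was one didrachma (= 2 drachmas) per adult male per year,
# and the ratio of total persons to taxed adult males is the 4x implied by 180,000/45,000.
DIDRACHMA_IN_DRACHMAS = 2

adult_males = 45_000                                        # figure stated in the passage
implied_tax_receipts = adult_males * DIDRACHMA_IN_DRACHMAS  # 90,000 drachmas for one year
household_multiplier = 180_000 / adult_males                # = 4.0, implied by the passage
total_persons = int(adult_males * household_multiplier)

print(implied_tax_receipts, total_persons)                  # 90000 180000
```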
By the third century, Jewish communities began to re-establish themselves in Egypt and Cyrenaica, primarily through immigration from the Land of Israel.

Destruction of Judea
Roman rule in Judea began in 63 BCE with the capture of Jerusalem by Pompey. After the city fell to Pompey's forces, thousands of Jewish prisoners of war were brought from Judea to Rome and sold into slavery. After these Jewish slaves were manumitted, they settled permanently in Rome on the right bank of the Tiber as traders. In 37 BCE, the forces of the Jewish client king Herod the Great captured Jerusalem with Roman assistance, and there was likely an influx of Jewish slaves taken into the diaspora by Roman forces. In 53 BCE, a minor Jewish revolt was suppressed and the Romans subsequently sold Jewish war captives into slavery. Roman rule continued until the First Jewish–Roman War, or the Great Revolt, a Jewish uprising to fight for independence, which began in 66 CE and was eventually crushed in 73 CE, culminating in the Siege of Jerusalem and the burning and destruction of the Temple, the centre of the national and religious life of the Jews throughout the world. The Jewish diaspora at the time of the Temple's destruction, according to Josephus, was in Parthia (Persia), Babylonia (Iraq), and Arabia, with some Jews beyond the Euphrates and in Adiabene (Kurdistan). In Josephus' own words, he had informed "the remotest Arabians" about the destruction. Jewish communities also existed in southern Europe, Anatolia, Syria, and North Africa. Jewish pilgrims from the diaspora, undeterred by the rebellion, had actually come to Jerusalem for Passover prior to the arrival of the Roman army, and many became trapped in the city and died during the siege. According to Josephus, about 97,000 Jewish captives from Judea were sold into slavery by the Romans during the revolt. Many other Jews fled from Judea to other areas around the Mediterranean. Josephus wrote that 30,000 Jews were deported from Judea to Carthage by the Romans.

Exactly when Roman anti-Judaism began is a question of scholarly debate; however, historian Hayim Hillel Ben-Sasson has proposed that the "Crisis under Caligula" (37–41) was the "first open break between Rome and the Jews". Meanwhile, the Kitos War, a rebellion by Jewish diaspora communities in Roman territories in the Eastern Mediterranean and Mesopotamia, led to the destruction of Jewish communities in Crete, Cyprus, and North Africa in 117 CE, and consequently to the dispersal of Jews already living outside of Judea to further reaches of the Empire.

Jerusalem had been left in ruins from the time of Vespasian. Sixty years later, Hadrian, who had been instrumental in the expulsion from Palestine of Marcius Turbo after his bloody repression of Jews in the diaspora in 117 CE, visited the area of Iudaea and decided in 130 CE to rebuild the city and settle it, with a Roman colonia and foreign cults; circumstantial evidence suggests it was he who renamed it Ælia Capitolina. It is commonly held that this was done as an insult to the Jews and as a means of erasing the land's Jewish identity. Others have argued that this project expressed an intention to establish a firm Roman imperial presence, administratively and culturally, and thus to incorporate the province, now called Syria Palaestina, into the Roman world system.
These political measures were, according to Menachem Mor, devoid of any intention to eliminate Judaism; indeed, the pagan reframing of Jerusalem may instead have been a strategic move designed to challenge the growing threat, pretensions, and influence of converts to Christianity, for whom Jerusalem was likewise a crucial symbol of their faith. Implementation of these plans led to violent opposition and triggered a full-scale insurrection with the Bar Kokhba revolt (132–136 CE), assisted, according to Dio Cassius, by some other peoples, perhaps Arabs who had recently been subjected by Trajan. The revolt was crushed, with the Jewish population of Judea devastated. Jewish war captives were again taken and sold into slavery by the Romans. According to Jewish tradition, the Romans deported twelve boatloads of Jews to Cyrenaica. Voluntary Jewish emigration from Judea in the aftermath of the Bar Kokhba revolt also expanded Jewish communities in the diaspora. Jews were forbidden entrance to Jerusalem on pain of death, except for the day of Tisha B'Av. There was a further shift of the center of religious authority from Yavne, as rabbis regrouped in Usha in the western Galilee, where the Mishnah was composed. This ban struck a blow at Jewish national identity within Palestine; the Romans, however, continued to allow Jews in the diaspora their distinct national and religious identity throughout the Empire. The military defeats of the Jews in Judaea in 70 CE and again in 135 CE, with large numbers of Jewish captives from Judea sold into slavery and an increase in voluntary Jewish emigration from Judea as a result of the wars, meant that the drop in Palestine's Jewish population was balanced by a rise in diaspora numbers. Jewish prisoners sold as slaves in the diaspora and their children were eventually manumitted and joined local free communities. It has been argued that the archaeological evidence is suggestive of a Roman genocide taking place during the second revolt. A significant movement of gentiles and Samaritans into villages formerly with a Jewish majority appears to have taken place thereafter. During the Crisis of the Third Century, civil wars in the Roman Empire caused great economic disruption, and the taxes imposed to finance these wars weighed heavily on the Jewish population of Palestine. As a result, many Jews emigrated to Babylon under the more tolerant Sassanid Empire, where autonomous Jewish communities continued to flourish, lured by the promise of economic prosperity and the ability to lead a full Jewish life there. Between the 3rd and 7th centuries, estimates indicate that the Babylonian Jewish community numbered approximately one million, which may have made it the largest Jewish diaspora population of the time, possibly outnumbering the community in the Land of Israel. Palestine and Babylon were both great centers of Jewish scholarship during this time, but tensions between scholars in the two communities grew, as many Jewish scholars in Palestine feared that the centrality of the land to the Jewish religion would be lost with continuing Jewish emigration. Many Palestinian sages refused to consider Babylonian scholars their equals and would not ordain Babylonian students in their academies, fearing they would return to Babylon as rabbis. Significant Jewish emigration to Babylon adversely affected the Jewish academies of Palestine, and by the end of the third century they were reliant on donations from Babylon. 
The effect that the destruction of Jerusalem had on the Jewish diaspora has been a topic of considerable scholarly discussion. David Aberbach has argued that much of the European Jewish diaspora, by which he means exile or voluntary migration, originated with the Jewish wars which occurred between 66 and 135 CE. Martin Goodman states that it is only after the destruction of Jerusalem that Jews are found in northern Europe and along the western Mediterranean coast. Howard Adelman and Elazar Barkan challenge the "widespread notion" that the Jews of Judea were only expelled after the destruction of the Second Temple in 70 CE and the Jewish defeat in the Bar Kokhba revolt in 135 CE. They also contend that it is "misleading" to say that the expulsion from Judea created the diaspora. Israel Bartal contends that Shlomo Sand is incorrect in presenting as a Zionist myth the claim that the original Jews living in Israel were exiled by the Romans, arguing instead that this view is negligible among serious scholars of Jewish studies. These scholars argue that the growth of diaspora Jewish communities was a gradual process that occurred over the centuries, starting with the Assyrian destruction of Israel, the Babylonian destruction of Judah, the Roman destruction of Judea, and the subsequent rule of Christians and Muslims. After the revolt, the Jewish religious and cultural center shifted to the Babylonian Jewish community and its scholars. For the generations that followed, the destruction of the Second Temple came to represent a fundamental insight about the Jews, who had become a dispossessed and persecuted people for much of their history. Erich S. Gruen contends that focusing on the destruction of the Temple misses the point that the diaspora was already well established before this. Gruen argues that compulsory dislocation of Jews during the Second Temple period (516 BCE – 70 CE) cannot account for more than a fraction of the eventual diaspora. Rather, the Jewish diaspora of this period arose from various factors, including the flight of political and war refugees, enslavement, deportation, overpopulation, indebtedness, military employment, and opportunities in business, commerce, and agriculture. Avrum Ehrlich also states that already well before the destruction of the Temple in 70 CE, more Jews lived in the Diaspora than in Israel. Jonathan Adelman estimated that around 60% of Jews lived in the diaspora during the Second Temple period. According to Gruen: Perhaps three to five million Jews dwelled outside Palestine in the roughly four centuries that stretched from Alexander to Titus. The era of the Second Temple brought the issue into sharp focus, inescapably so. The Temple still stood, a reminder of the hallowed past, and, through most of the era, a Jewish regime existed in Palestine. Yet the Jews of the diaspora, from Italy to Iran, far outnumbered those in the homeland. Although Jerusalem loomed large in their self-perception as a nation, few of them had seen it, and few were likely to. Israel Yuval contends that the Babylonian captivity created a promise of return in the Jewish consciousness, which had the effect of enhancing the Jewish self-perception of exile after the destruction of the Second Temple, even though that dispersion was due to an array of non-exilic factors. According to Hasia R. 
Diner, the destruction of the Second Temple in 70 CE, followed by the dissolution, in 132 CE, of Jewish sovereignty over the territory renamed Syria Palaestina, had launched the second dispersion of the diaspora, the first being the Babylonian exile of 586 BCE. She writes that, "Although many Jews had lived outside Judea even before that [the destruction of Judea], the ending of home rule set in motion the world's longest diaspora." Byzantine, Islamic, and Crusader periods In the 4th century, the Roman Empire split and Palestine came under the control of the Byzantine Empire. There was still a significant Jewish population there, and Jews probably constituted a majority of the population until some time after Constantine converted to Christianity in the 4th century. The ban on Jewish settlement in Jerusalem was maintained. There was a minor Jewish rebellion against a corrupt governor from 351 to 352 which was put down. In the 5th century, the collapse of the Western Roman Empire resulted in Christian migration into Palestine and the development of a firm Christian majority. Judaism was the only non-Christian religion tolerated, but the Jews were discriminated against in various ways. They were prohibited from building new houses of worship, holding public office, or owning slaves. The 7th century saw the Jewish revolt against Heraclius, which broke out in 614 during the Byzantine–Sasanian War. It was the last serious attempt by Jews to gain autonomy in the Land of Israel prior to modern times. Jewish rebels aided the Persians in capturing Jerusalem, where the Jews were permitted autonomous rule until 617, when the Persians reneged on their alliance. After Byzantine Emperor Heraclius promised to restore Jewish rights, the Jews aided him in ousting the Persians. Heraclius subsequently went back on his word and ordered a general massacre of the Jewish population, devastating the Jewish communities of Jerusalem and the Galilee. As a result, many Jews fled to Egypt. In 638, Palestine came under Muslim rule with the Muslim conquest of the Levant. One estimate placed the Jewish population of Palestine at between 300,000 and 400,000 at the time. However, this is contrary to other estimates which place it at 150,000 to 200,000 at the time of the revolt against Heraclius. According to historian Moshe Gil, the majority of the population was Jewish or Samaritan. The land gradually came to have an Arab majority as Arab tribes migrated there. Jewish communities initially grew and flourished. Umar allowed and encouraged Jews to settle in Jerusalem. It was the first time in about 500 years that Jews were allowed to freely enter and worship in their holiest city. In 717, new restrictions were imposed against non-Muslims that negatively affected the Jews. Heavy taxes on agricultural land forced many Jews to migrate from rural areas to towns. Social and economic discrimination caused significant Jewish emigration from Palestine, and Muslim civil wars in the 8th and 9th centuries pushed many Jews out of the country. By the end of the 11th century the Jewish population of Palestine had declined substantially. During the First Crusade, Jews in Palestine, along with Muslims, were indiscriminately massacred and sold into slavery by the Crusaders. The majority of Jerusalem's Jewish population was killed during the Crusader Siege of Jerusalem and the few thousand survivors were sold into slavery. 
Some of the Jews sold into slavery later had their freedom bought by Jewish communities in Italy and Egypt, and the redeemed slaves were taken to Egypt. Some Jewish prisoners of war were also deported to Apulia in southern Italy. Relief for the Jewish population of Palestine came when the Ayyubid dynasty defeated the Crusaders and conquered Palestine (see 1187 Battle of Hattin). Some Jewish immigration from the diaspora subsequently took place, but this came to an end when the Mamluks took over Palestine (see 1291 Fall of Acre). The Mamluks severely oppressed the Jews and greatly mismanaged the economy, resulting in a period of great social and economic decline. The result was large-scale emigration from Palestine, and the population declined. The Jewish population shrank especially heavily, as did the Christian population. Though some Jewish immigration from Europe, North Africa, and Syria also occurred in this period, which potentially saved the collapsing Jewish community of Palestine from disappearing altogether, Jews were reduced to an even smaller minority of the population. The result of these waves of emigration and expulsion was that the Jewish population of Palestine was reduced to a few thousand by the time the Ottoman Empire conquered Palestine, after which the region entered a period of relative stability. At the start of Ottoman rule in 1517, the estimated Jewish population was 5,000, composed of both descendants of Jews who had never left the land and migrants from the diaspora.[better source needed] Post-Roman period Jewish diaspora populations During the Middle Ages, due to increasing geographical dispersion and re-settlement, Jews divided into distinct regional groups which today are generally described according to two primary geographical groupings: the Ashkenazi of Northern and Eastern Europe, and the Sephardic Jews of Iberia (Spain and Portugal), North Africa and the Middle East. These groups have parallel histories sharing many cultural similarities, as well as a series of massacres, persecutions and expulsions, such as the expulsion from England in 1290, the expulsion from Spain in 1492, and the expulsion from Arab countries in 1948–1973. Although the two branches comprise many unique ethno-cultural practices and have links to their local host populations (such as Central Europeans for the Ashkenazim and Hispanics and Arabs for the Sephardim), their shared religion and ancestry, as well as their continuous communication and population transfers, have been responsible for a unified sense of cultural and religious Jewish identity between Sephardim and Ashkenazim from the late Roman period to the present. By 1764 there were about 750,000 Jews in the Polish–Lithuanian Commonwealth. The worldwide Jewish population (comprising the Middle East and the rest of Europe) was estimated at 1.2 million. After the Persian conquest of Babylon in 539 BCE, Judah (יְהוּדָה Yehuda) became a province of the Persian empire. This status continued into the following Hellenistic period, when Yehud became a disputed province of Ptolemaic Egypt and Seleucid Syria. In the early part of the 2nd century BCE, a revolt against the Seleucids led to the establishment of an independent Jewish kingdom under the Hasmonean dynasty. The Hasmoneans adopted a deliberate policy of imitating and reconstituting the Davidic kingdom, and as part of this policy forcibly converted their neighbours in the Land of Israel to Judaism. 
The conversions included Nabateans (Zabadeans) and Itureans, the peoples of the former Philistine cities, the Moabites, Ammonites and Edomites. Attempts were also made to incorporate the Samaritans, following the takeover of Samaria. The success of these mass conversions is, however, questionable, as most groups retained their tribal separateness and largely turned Hellenistic or Christian, the Edomites perhaps being the only group to merge into Jewish society under the Herodian dynasty and in the following period of the Jewish–Roman wars. Ashkenazi Jews are a general category of Jewish populations who immigrated to what is now Germany and northeastern France during the Middle Ages and who, until modern times, adhered to Yiddish culture and the Ashkenazi prayer style. There is evidence that groups of Jews had immigrated to Germania during the Roman Era; they were probably merchants who followed the Roman Legions during their conquests. However, for the most part, modern Ashkenazi Jews originated with Jews who migrated or were forcibly taken from the Middle East to southern Europe in antiquity, where they established Jewish communities before moving into northern France and lower Germany during the High and Late Middle Ages. They also descend to a lesser degree from Jewish immigrants from Babylon, Persia, and North Africa who migrated to Europe in the Middle Ages. The Ashkenazi Jews later migrated from Germany (and elsewhere in Central Europe) into Eastern Europe as a result of persecution. Some Ashkenazi Jews also have minor ancestry from Sephardi Jews exiled from Spain, first during Islamic persecutions (11th–12th centuries) and later during Christian reconquests (13th–15th centuries) and the Spanish Inquisition (15th–16th centuries). Ashkenazi Jews are of mixed Middle Eastern and European ancestry, as they derive part of their ancestry from non-Jewish Europeans who intermixed with Jews of migrant Middle Eastern origin. In 2006, a study by Doron Behar and Karl Skorecki of the Technion and Rambam Medical Center in Haifa, Israel, demonstrated that the vast majority of Ashkenazi Jews, both men and women, have Middle Eastern ancestry. According to a 2010 report by Nicholas Wade on an autosomal study, Ashkenazi Jews share a common ancestry with other Jewish groups, and Ashkenazi and Sephardi Jews have roughly 30% European ancestry, with the rest being Middle Eastern. According to Hammer, the Ashkenazi population expanded through a series of bottlenecks—events that squeeze a population down to small numbers—perhaps as it migrated from the Middle East after the destruction of the Second Temple in 70 CE to Italy, reaching the Rhine Valley in the 10th century. David Goldstein, a Duke University geneticist and director of the Duke Center for Human Genome Variation, has said that the work of the Technion and Rambam team served only to confirm that genetic drift played a major role in shaping Ashkenazi mitochondrial DNA (mtDNA), which is inherited in a matrilineal manner. Goldstein argues that the Technion and Rambam mtDNA studies fail to actually establish a statistically significant maternal link between modern Jews and historic Middle Eastern populations. This differs from the patrilineal case, where Goldstein said there is no doubt of a Middle Eastern origin. In June 2010, a study by Behar et al. found that "most Jewish samples form a remarkably tight subcluster with common genetic origin, that overlies Druze and Cypriot samples but not samples from other Levantine populations or paired diaspora host populations. 
In contrast, Ethiopian Jews (Beta Israel) and Indian Jews (Bene Israel and Cochini) cluster with neighboring autochthonous populations in Ethiopia and western India, respectively, despite a clear paternal link between the Bene Israel and the Levant." "The most parsimonious explanation for these observations is a common genetic origin, which is consistent with an historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant." In conclusion, the authors state that the genetic results are concordant "with the dispersion of the people of ancient Israel throughout the Old World". Regarding the samples he used, Behar points out that "Our conclusion favoring common ancestry (of Jewish people) over recent admixture is further supported by the fact that our sample contains individuals that are known not to be admixed in the most recent one or two generations." A 2013 study of Ashkenazi mitochondrial DNA by Costa et al. concluded that the four major female founders and most of the minor female founders had ancestry in prehistoric Europe, rather than the Near East or Caucasus. According to the study, these findings "point to a significant role for the conversion of women in the formation of Ashkenazi communities" and their intermarriage with Jewish men of Middle Eastern origin. A study by Haber et al. (2013) noted that while previous studies of the Levant, which had focused mainly on diaspora Jewish populations, showed that the "Jews form a distinctive cluster in the Middle East", these studies did not make clear "whether the factors driving this structure would also involve other groups in the Levant". The authors found strong evidence that modern Levant populations descend from two major apparent ancestral populations. One set of genetic characteristics, shared with modern-day Europeans and Central Asians, is most prominent in the Levant amongst "Lebanese, Armenians, Cypriots, Druze and Jews, as well as Turks, Iranians and Caucasian populations". The second set of inherited genetic characteristics is shared with populations in other parts of the Middle East as well as some African populations. Levant populations in this category today include "Palestinians, Jordanians, Syrians, as well as North Africans, Ethiopians, Saudis, and Bedouins". Concerning this second component of ancestry, the authors remark that it correlates with "the pattern of the Islamic expansion" and that "a pre-Islamic expansion Levant was more genetically similar to Europeans than to Middle Easterners," but they also say that "its presence in Lebanese Christians, Sephardi and Ashkenazi Jews, Cypriots and Armenians might suggest that its spread to the Levant could also represent an earlier event". The authors also found a strong correlation between religion and apparent ancestry in the Levant: all Jews (Sephardi and Ashkenazi) cluster in one branch; Druze from Mount Lebanon and Druze from Mount Carmel are depicted on a private branch; and Lebanese Christians form a private branch with the Christian populations of Armenia and Cyprus, with the Lebanese Muslims placed as an outer group. The predominantly Muslim populations of Syrians, Palestinians and Jordanians cluster on branches with other Muslim populations as distant as Morocco and Yemen. Another 2013 study, conducted by Doron M. 
Behar of the Rambam Health Care Campus in Israel and others, suggests that "Cumulatively, our analyses point strongly to ancestry of Ashkenazi Jews primarily from European and Middle Eastern populations and not from populations in or near the Caucasus region. The combined set of approaches suggests that the observations of Ashkenazi proximity to European and Middle Eastern populations in population structure analyses reflect actual genetic proximity of Ashkenazi Jews to populations with predominantly European and Middle Eastern ancestry components, and lack of visible introgression from the region of the Khazar Khaganate—particularly among the northern Volga and North Caucasus populations—into the Ashkenazi community." A 2014 study by Fernández et al. found that Ashkenazi Jews display a frequency of haplogroup K in their maternal (mitochondrial) DNA suggesting an ancient Near Eastern matrilineal origin, similar to the results of the Behar study in 2006. Fernández noted that this observation clearly contradicts the results of the 2013 study led by Costa, Richards et al. that suggested a European source for three exclusively Ashkenazi K lineages. Sephardi Jews are Jews whose ancestors lived in Spain or Portugal. Some 300,000 Jews resided in Spain before the Spanish Inquisition in the 15th century, when the Reyes Católicos reconquered Spain from the Arabs and ordered the Jews to convert to Catholicism, leave the country, or face execution without trial. Those who chose not to convert, between 40,000 and 100,000, were expelled from Spain in 1492 in the wake of the Alhambra decree. Sephardic Jews subsequently migrated to North Africa (Maghreb), Christian Europe (the Netherlands, Britain, France and Poland), throughout the Ottoman Empire, and even to the newly discovered Latin America. In the Ottoman Empire, the Sephardim mostly settled in the European portion of the Empire, mainly in major cities such as Istanbul, Selânik and Bursa. Selânik, today known as Thessaloniki and located in modern-day Greece, had a large and flourishing Sephardic community, as did the community of Maltese Jews in Malta. A small number of Sephardic refugees who fled via the Netherlands as Marranos settled in Hamburg and Altona, Germany, in the early 16th century, eventually appropriating Ashkenazic Jewish rituals into their religious practice. One famous figure from the Sephardic Ashkenazic population is Glückel of Hameln. Some relocated to the United States, establishing the country's first organized community of Jews and erecting the United States' first synagogue. Nevertheless, the majority of Sephardim remained in Spain and Portugal as Conversos, which would also be the fate for those who had migrated to Spanish- and Portuguese-ruled Latin America. Sephardic Jews evolved to form most of North Africa's Jewish communities of the modern era, as well as the bulk of the Turkish, Syrian, Galilean and Jerusalemite Jews of the Ottoman period. Mizrahi Jews are Jews descended from the Jewish communities of the Middle East, Central Asia and the Caucasus, largely originating from the Babylonian Jewry of the classic period. The term Mizrahi is used in Israel in the language of politics, the media and some social scientists for Jews from the Arab world and adjacent, primarily Muslim-majority, countries. The definition of Mizrahi includes the modern Iraqi Jews, Syrian Jews, Lebanese Jews, Persian Jews, Afghan Jews, Bukharian Jews, Kurdish Jews, Mountain Jews, and Georgian Jews. 
Some also include the North-African Sephardic communities and Yemenite Jews under the definition of Mizrahi, but do so on grounds of political generalization rather than ancestry. Temanim are Jews who were living in Yemen prior to immigrating to Ottoman Palestine and Israel. Their geographic and social isolation from the rest of the Jewish community over the course of many centuries allowed them to develop a liturgy and set of practices that are significantly distinct from those of other Oriental Jewish groups; they themselves comprise three distinctly different groups, though the distinction is one of religious law and liturgy rather than of ethnicity. Traditionally, the genesis of the Yemenite Jewish community is dated to after the Babylonian exile, though the community most probably emerged during Roman times; it was significantly reinforced during the reign of Dhu Nuwas in the 6th century CE and during the later Muslim conquests in the 7th century CE, which drove the Arab Jewish tribes out of central Arabia. Karaim are Jews who lived mostly in Egypt, Iraq, and Crimea during the Middle Ages. They are distinguished by the form of Judaism which they observe. Rabbinic Jews of varying communities have affiliated with the Karaite community throughout the millennia. As such, Karaite Jews are less an ethnic division than members of a particular branch of Judaism. Karaite Judaism recognizes the Tanakh as the single religious authority for the Jewish people. Linguistic principles and contextual exegesis are used in arriving at the correct meaning of the Torah. Karaite Jews strive to adhere to the plain or most obvious understanding of the text when interpreting the Tanakh. By contrast, Rabbinical Judaism regards an Oral Law (codified and recorded in the Mishnah and the Talmud) as being equally binding on Jews, and mandated by God. In Rabbinical Judaism, the Oral Law forms the basis of religion, morality, and Jewish life. Karaite Jews rely on the use of sound reasoning and the application of linguistic tools to determine the correct meaning of the Tanakh, while Rabbinical Judaism looks to the Oral Law codified in the Talmud to provide the Jewish community with an accurate understanding of the Hebrew Scriptures. The differences between Karaite and Rabbinic Judaism go back more than a thousand years. Rabbinical Judaism originates from the Pharisees of the Second Temple period. Karaite Judaism may have its origins among the Sadducees of the same era. Karaite Jews hold the entire Hebrew Bible to be a religious authority. As such, the vast majority of Karaites believe in the resurrection of the dead. Karaite Jews are widely regarded as being halachically Jewish by the Orthodox Rabbinate. Similarly, members of the rabbinic community are considered Jews by the Moetzet Hakhamim if they are patrilineally Jewish.[citation needed] Jews of Israel comprise an increasingly mixed range of Jewish communities making aliyah from Europe, North Africa, and elsewhere in the Middle East. While a significant portion of Israeli Jews still retain memories of their Sephardic, Ashkenazi and Mizrahi origins, mixed Jewish marriages among the communities are very common. There are also smaller groups of Yemenite Jews, Indian Jews and others, who still retain a semi-separate communal life. There are also approximately 50,000 adherents of Karaite Judaism, most of whom live in Israel, but their exact numbers are not known, because most Karaites have not participated in any religious censuses. 
The Beta Israel, though somewhat disputed as the descendants of the ancient Israelites, are widely recognized in Israel as Ethiopian Jews.[citation needed] The ancestry of most American Jews goes back to Ashkenazi Jewish communities that immigrated to the US in the course of the 19th and 20th centuries, as well as more recent influxes of Persian and other Mizrahi Jewish immigrants. The American Jewish community is considered to contain the highest percentage of mixed marriages between Jews and non-Jews, resulting in both increased assimilation and a significant influx of non-Jews becoming identified as Jews. The most widespread Jewish denomination in the U.S. is Reform Judaism, which does not require its members to prove direct descent from the ethnic Jews or Biblical Israelites, nor does it regard the Jews as necessarily possessing such descent. These attitudes had been present in Reform Judaism for many years but were codified in a 1983 decree by the Central Conference of American Rabbis, On Patrilineal Descent. Among other assertions, the 1983 decree holds that matrilineal descent is not necessary for a person to be considered Jewish. This is in marked contrast to Orthodox Judaism, whose adherents represent around 30% of the Jews in Israel. Orthodox Judaism considers the Jewish people to be a closed ethnoreligious community and consequently possesses very strict procedures for conversion, a practice that it does not generally encourage. The Jews of modern France number around 400,000 persons, largely descendants of North African communities, some of which were Sephardic communities that had come from Spain and Portugal—others were Arab and Berber Jews from Algeria, Morocco and Tunisia, who were already living in North Africa before the Jewish exodus from the Iberian Peninsula—and to a smaller degree members of the Ashkenazi Jewish communities who survived WWII and the Holocaust. Mountain Jews are Jews from the eastern and northern slopes of the Caucasus, mainly Azerbaijan, Chechnya and Dagestan. They are the descendants of Persian Jews from Iran. Bukharan Jews are an ethnic group from Central Asia who historically practised Judaism and spoke Bukhori, a dialect of the Tajik-Persian language. The Kaifeng Jews are members of a small Jewish community in Kaifeng, in the Henan province of China, who have assimilated into Chinese society while preserving some Jewish traditions and customs. Cochin Jews, also called Malabar Jews, are the oldest group of Jews in India, with possible roots that are claimed to date back to the time of King Solomon. The Cochin Jews settled in the Kingdom of Cochin in South India, now part of the state of Kerala. As early as the 12th century, mention is made of the Black Jews in southern India. The Jewish traveler Benjamin of Tudela, speaking of Kollam (Quilon) on the Malabar Coast, writes in his Itinerary: "...throughout the island, including all the towns thereof, live several thousand Israelites. The inhabitants are all black, and the Jews also. The latter are good and benevolent. They know the law of Moses and the prophets, and to a small extent the Talmud and Halacha." These people later became known as the Malabari Jews. They built synagogues in Kerala beginning in the 12th and 13th centuries. They are known to have developed Judeo-Malayalam, a dialect of the Malayalam language. 
Paradesi Jews are mainly the descendants of Sephardic Jews who originally immigrated to India from Sepharad (Spain and Portugal) during the 15th and 16th centuries in order to flee forced conversion or persecution in the wake of the Alhambra Decree, which expelled the Jews from Spain. They are sometimes referred to as White Jews, although that usage is generally considered pejorative or discriminatory; the term is instead used to refer to relatively recent Jewish immigrants (from the end of the 15th century onwards), who are predominantly Sephardim. The Paradesi Jews of Cochin are a community of Sephardic Jews whose ancestors settled among the larger Cochin Jewish community located in Kerala, a coastal southern state of India. The Paradesi Jews of Madras traded in diamonds, precious stones, and corals; they had very good relations with the rulers of Golkonda, maintained trade connections with Europe, and their language skills were useful. Although the Sephardim spoke Ladino (i.e. Spanish or Judeo-Spanish), in India they learned to speak Tamil and Judeo-Malayalam from the Malabar Jews.[full citation needed] The Georgian Jews are considered ethnically and culturally distinct from neighboring Mountain Jews. They were also traditionally a highly separate group from the Ashkenazi Jews in Georgia. The Krymchaks are Jewish ethno-religious communities of Crimea derived from Turkic-speaking adherents of Orthodox Judaism. During the history of the Jewish diaspora, Jews who lived in Christian Europe were often attacked by the local Christian population, and they were often forced to convert to Christianity. Many, known as "Anusim" ('forced ones'), continued practicing Judaism in secret while living outwardly as ordinary Christians. The best known Anusim communities were the Jews of Spain and the Jews of Portugal, although they existed throughout Europe. In the centuries since the rise of Islam, many Jews living in the Muslim world were forced to convert to Islam,[citation needed] such as the Mashhadi Jews of Persia, who continued to practice Judaism in secret and eventually moved to Israel. Many of the Anusim's descendants left Judaism over the years. The results of a genetic study of the population of the Iberian Peninsula released in December 2008 "attest to a high level of religious conversion (whether voluntary or enforced) driven by historical episodes of religious intolerance, which ultimately led to the integration of the Anusim's descendants." The Samaritans, who comprised a comparatively large group in classical times, now number 745 people; today they live in two communities in Israel and the West Bank, and they still regard themselves as descendants of the tribes of Ephraim (named by them as Aphrime) and Manasseh (named by them as Manatch). Samaritans adhere to a version of the Torah known as the Samaritan Pentateuch, which differs in some respects from the Masoretic text, sometimes in important ways, and less so from the Septuagint. The Samaritans consider themselves Bnei Yisrael ("Children of Israel" or "Israelites"), but they do not regard themselves as Yehudim (Jews). They view the term "Jews" as a designation for followers of Judaism, which they assert is a related but altered and amended religion which was brought back by the exiled Israelite returnees, and is therefore not the true religion of the ancient Israelites, which according to them is Samaritanism. 
Genetic studies Y DNA studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male line ancestors appear to have been mainly Middle Eastern. For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany and the French Rhine Valley. This is consistent with Jewish traditions which place most Jewish paternal origins in the region of the Middle East. Conversely, the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In contrast, Behar has found evidence that about 40% of Ashkenazi Jews originate maternally from just four female founders, who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of the non-local maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews, the authors conclude "Clearly, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons." Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important as the technology develops. They show that Jewish populations have tended to form relatively closely related groups in independent communities, with most people in a community sharing significant ancestry in common. For Jewish populations of the diaspora, the genetic composition of Ashkenazi, Sephardi, and Mizrahi Jewish populations show a predominant amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout the Old World". North African, Italian and others of Iberian origin show variable frequencies of admixture with non-Jewish historical host populations among the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Jewish admixture is mainly southern European, while Mizrahi Jews show evidence of admixture with other Middle Eastern populations and Sub-Saharan Africans. Behar et al. have remarked on an especially close relationship of Ashkenazi Jews and modern Italians. Jews were found to be more closely related to groups in the north of the Fertile Crescent (Kurds, Turks, and Armenians) than to Arabs. 
The studies also estimate that up to 19.8% of the modern population of Iberia (Spain and Portugal) and at least 10% of the modern population of Ibero-America (Hispanic America and Brazil) have Sephardic Jewish ancestry from within the last few centuries, reflecting descent from the Sephardic Bnei Anusim (descendants of the "anusim" who were forced to convert to Catholicism). The Bene Israel and the Cochin Jews of India, Beta Israel of Ethiopia, and a portion of the Lemba people of Southern Africa, meanwhile, despite more closely resembling the local populations of their native countries, also have some more remote ancient Jewish descent. Zionist "negation of the Diaspora" According to Eliezer Schweid, the rejection of life in the diaspora is a central assumption in all currents of Zionism. Underlying this attitude was the feeling that the diaspora restricted the full growth of Jewish national life. For instance, the poet Hayim Nahman Bialik wrote: "And my heart weeps for my unhappy people ... How burned, how blasted must our portion be, If seed like this is withered in its soil. ..." According to Schweid, Bialik meant that the "seed" was the potential of the Jewish people. Preserved in the diaspora, this seed could only give rise to deformed results; however, once conditions changed, the seed could still provide a plentiful harvest. In this matter, the historian Zeev Sternhell distinguishes two schools of thought in Zionism. One was the liberal or utilitarian school of Theodor Herzl and Max Nordau. Especially after the Dreyfus Affair, they held that antisemitism would never disappear, and they saw Zionism as a rational solution for Jews. The other was the organic nationalist school. It was prevalent among the Zionist olim, who saw the movement as a project to rescue the Jewish nation rather than merely a project to rescue Jews. For them, Zionism was the "Rebirth of the Nation". In the 2008 book The Invention of the Jewish People, Shlomo Sand argued that the formation of the "Jewish-Israeli collective memory" had inculcated a "period of silencing" in Jewish history, particularly with regard to the formation of the Khazar Kingdom out of converted gentile tribes. Israel Bartal, then dean of the humanities faculty of the Hebrew University, countered "that no historian of the Jewish national movement has ever really believed that the origins of the Jews are ethnically and biologically 'pure.' [...] No 'nationalist' Jewish historian has ever tried to conceal the well-known fact that conversions to Judaism had a major impact on Jewish history in the ancient period and in the early Middle Ages. Although the myth of an exile from the Jewish homeland (Palestine) does exist in popular Israeli culture, it is negligible in serious Jewish historical discussions." Mystical explanation Rabbi Tzvi Elimelech of Dinov (Bnei Yissaschar, Chodesh Kislev, 2:25) explains that each exile was characterized by a different negative aspect. The Jewish fast day of Tisha B'Av commemorates the destruction of the First and Second Temples in Jerusalem and the subsequent exile of the Jews from the Land of Israel. The Jewish tradition maintains that the Roman exile would be the last, and that after the people of Israel returned to their land, they would never be exiled again. This statement is based on the verse: "(You paying for) Your sin is over daughter of Zion, he will not exile you (any)more" ["תם עוונך בת ציון, לא יוסף להגלותך"]. 
In Christian theology According to Aharon Oppenheimer, the notion that the exile began immediately after the destruction of the Second Temple was developed by early Christians. They saw the destruction of the Temple as a punishment for Jewish deicide and, by extension, as an affirmation of Christians as God's new chosen people, the "New Israel," having superseded the Jews' chosenness. In the period following the destruction of the Temple, Jews in fact enjoyed many freedoms under Roman rule. The people of Israel had religious, economic, and cultural autonomy, and the Bar Kochba revolt demonstrated Israel's unity and political-military power at that time. Therefore, according to Oppenheimer, the Jewish exile started only after the Bar Kochba revolt, which devastated the Jewish community of Judea. Contrary to popular conception, Jews maintained a continuous presence in the Land of Israel despite the exile of the majority of Judeans. The Jerusalem Talmud was compiled in the 4th century, hundreds of years after the revolt.[citation needed] Moreover, many Jews remained in Israel even centuries later, including during the Byzantine period; many remnants of synagogues are found from this period.[better source needed] Jews have been a majority or a significant plurality in Jerusalem in the millennia since their exile, with few exceptions (including the period following the Crusader Siege of Jerusalem in 1099 and the 18 years of Jordanian rule over eastern Jerusalem, during which the population of Jerusalem's historic Jewish Quarter was expelled). Historical comparison of Jewish population Today As of 2023, about 8.5 million Jews live outside Israel, which hosts the largest Jewish population in the world with 7.2 million. Israel is followed by the United States with approximately 6.3 million. Other countries with significant Jewish populations include France (440,000), Canada (398,000), the United Kingdom (312,000), Argentina (171,000), Russia (132,000), Germany (125,000), Australia (117,200), Brazil (90,000), and South Africa (50,000). These numbers reflect the "core" Jewish population, defined as being "not inclusive of non-Jewish members of Jewish households, persons of Jewish ancestry who profess another monotheistic religion, other non-Jews of Jewish ancestry, and other non-Jews who may be interested in Jewish matters."[citation needed] Jewish populations also remain in Middle Eastern and North African countries outside of Israel, particularly Turkey, Iran, Morocco, Tunisia, and the United Arab Emirates. In general, these populations are shrinking due to low growth rates and high rates of emigration (particularly since the 1960s).[citation needed] The Jewish Autonomous Oblast continues to be an autonomous oblast of Russia. The Chief Rabbi of Birobidzhan, Mordechai Scheiner, says there are 4,000 Jews in the capital city. Governor Nikolay Mikhaylovich Volkov has stated that he intends to "support every valuable initiative maintained by our local Jewish organizations." The Birobidzhan Synagogue opened in 2004, on the 70th anniversary of the region's founding in 1934. An estimated 75,000 Jews live in Siberia. 
Metropolitan areas with the largest Jewish populations are listed below, though one source, jewishtemples.org, states that "It is difficult to come up with exact population figures on a country by country basis, let alone city by city around the world. Figures for Russia and other CIS countries are but educated guesses." The source cited here, the 2010 World Jewish Population Survey, also notes that "Unlike our estimates of Jewish populations in individual countries, the data reported here on urban Jewish populations do not fully adjust for possible double counting due to multiple residences. The differences in the United States may be quite significant, in the range of tens of thousands, involving both major and minor metropolitan areas."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Wealth_inequality_in_the_United_States] | [TOKENS: 6970]
Wealth inequality in the United States The inequality of wealth (i.e., inequality in the distribution of assets) has substantially increased in the United States since the late 1980s. Wealth commonly includes the values of any homes, automobiles, personal valuables, businesses, savings, and investments, as well as any associated debts. Although different from income inequality, the two are related. Wealth is usually not used for daily expenditures or factored into household budgets, but combined with income, it represents a family's total opportunity to secure stature and a meaningful standard of living, or to pass their class status down to their children. Moreover, wealth provides for both short- and long-term financial security, bestows social prestige, contributes to political power, and can be leveraged to obtain more wealth. Hence, wealth provides mobility and agency—the ability to act. The accumulation of wealth enables a variety of freedoms, and removes limits on life that one might otherwise face. Federal Reserve data indicates that as of Q1 2024, the top 1% of households in the United States held 30.5% of the country's wealth, while the bottom 50% held 2.5%. From 1989 to 2019, wealth became increasingly concentrated in the top 1% and top 10% due in large part to corporate stock ownership concentration in those segments of the population; the bottom 50% own little if any corporate stock. From an international perspective, the gap between median and mean wealth per adult in the US is over 600%. A 2011 study found that US citizens across the political spectrum dramatically underestimate the current level of wealth inequality in the US, and would prefer a far more egalitarian distribution of wealth. During the COVID-19 pandemic, the wealth held by billionaires in the U.S. increased by 70%, with 2020 marking the steepest increase in billionaires' share of wealth on record. Statistics In 2007, the top 20% of the wealthiest Americans possessed 80% of all financial assets. In 2007, the richest 1% of the American population owned 35% of the country's total wealth, and the next 19% owned 51%. The top 20% of Americans owned 86% of the country's wealth and the bottom 80% of the population owned 14%. In 2011, financial inequality was greater than inequality in total wealth, with the top 1% of the population owning 43%, the next 19% of Americans owning 50%, and the bottom 80% owning 7%. However, after the Great Recession, which began in 2007, the share of total wealth owned by the top 1% of the population grew from 35% to 37%, and that owned by the top 20% of Americans grew from 86% to 88%. The Great Recession also caused a drop of 36% in median household wealth, but a drop of only 11% for the top 1%, further widening the gap between the top 1% and the bottom 99%. According to PolitiFact and other sources, in 2011, the 400 wealthiest Americans had more wealth than half of all Americans combined. Inherited wealth may help explain why many Americans who have become rich had a substantial head start. In September 2012, according to the Institute for Policy Studies, over 60 percent of the Forbes richest 400 Americans grew up in substantial privilege. In 2013, wealth inequality in the U.S. was greater than in most developed countries, other than Switzerland and Denmark. In the United States, the use of offshore holdings is exceptionally small compared to Europe, where much of the wealth of the top percentiles is kept in offshore holdings. 
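The recurring statistics of the form "the top X% held Y% of the wealth" all rest on the same computation: rank households by net worth and report the share of total wealth held by the richest X percent. The sketch below illustrates that computation with made-up household figures; the numbers are not Federal Reserve or Survey of Consumer Finances data, and the function name is only illustrative.

```python
# Illustrative sketch of a "top X% wealth share" computation.
# The household figures are invented for demonstration; real estimates
# use survey or administrative data together with survey weights.

def top_share(net_worths, top_fraction):
    """Return the fraction of total wealth held by the richest `top_fraction`
    of households (assumes non-negative net worths)."""
    xs = sorted(net_worths, reverse=True)
    k = max(1, round(len(xs) * top_fraction))
    total = sum(xs)
    return sum(xs[:k]) / total if total else 0.0

if __name__ == "__main__":
    # Ten hypothetical households, net worth in thousands of dollars.
    households = [2, 5, 10, 20, 40, 60, 90, 150, 300, 1200]
    print(f"top 10% share:    {top_share(households, 0.10):.0%}")  # richest single household
    print(f"top 20% share:    {top_share(households, 0.20):.0%}")
    print(f"bottom 50% share: {1 - top_share(households, 0.50):.0%}")
```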
According to a 2014 Credit Suisse study, the ratio of wealth to household income is the highest it has been since the Great Depression. According to a paper published by the Federal Reserve in 1997, "For most households, pensions and Social Security are the most important sources of income during retirement, and the promised benefit stream constitutes a sizable fraction of household wealth" and "including pensions and Social Security in net worth makes the distribution more even." In Inequality for All—a 2013 documentary, narrated by Robert Reich, in which he argues that income inequality is the defining issue of the United States—Reich states that 95% of economic gains following the economic recovery which began in 2009 went to the top 1% of Americans (by net worth) (HNWI). A September 2017 study by the Federal Reserve reported that the top 1% owned 38.5% of the country's wealth in 2016. According to a June 2017 report by the Boston Consulting Group, around 70% of the nation's wealth will be in the hands of millionaires and billionaires by 2021. A 2019 study by economists Emmanuel Saez and Gabriel Zucman found that the average effective tax rate paid by the richest 400 families (0.003%) in the US was 23 percent, more than a percentage point lower than the 24.2 percent paid by the bottom half of American households. The Urban-Brookings Tax Policy Center found that the bottom 20 percent of earners pay an average 2.9 percent effective income tax rate federally, while the richest 1 percent paid an effective 29.6 percent tax rate and the top 0.01 percent paid an effective 30.6 percent tax rate. In 2019, the Institute on Taxation and Economic Policy found that when state and federal taxes are taken into account, however, the poorest 20 percent pay an effective 20.2 percent rate while the top 1 percent pay an effective 33.7 percent rate. Using Federal Reserve data, the Washington Center for Equitable Growth reported in August 2019 that: "Looking at the cumulative growth of wealth disaggregated by group, we see that the bottom 50 percent of wealth owners experienced no net wealth growth since 1989. At the other end of the spectrum, the top 1 percent have seen their wealth grow by almost 300 percent since 1989. Although cumulative wealth growth was relatively similar among all wealth groups through the 1990s, the top 1 percent and bottom 50 percent diverged around 2000." According to an analysis of Survey of Consumer Finances data from 2019 by the People's Policy Project, 79% of the country's wealth is owned by millionaires and billionaires. Also in 2019, PolitiFact reported that three people (less than the 400 reported in 2011) had more wealth than the bottom half of all Americans. During the COVID-19 pandemic, the wealth held by billionaires in the U.S. increased by 70%. According to the 2022 World Inequality Report, "2020 marked the steepest increase in global billionaires' share of wealth on record." As of late 2022, according to Snopes, 735 billionaires collectively possessed more wealth than the bottom half of U.S. households ($4.5 trillion and $4.1 trillion respectively). The top 1% held a total of $43.45 trillion. In the late 18th century, "incomes were more equally distributed in colonial America than in any other place that can be measured," according to Peter Lindert and Jeffrey Williamson. The richest 1 percent of households held only 8.5% of total income in the late 18th century. 
The Gini coefficient, which measures inequality on a scale from 0 to 1 (with 1 being very high inequality) was 0.367 in New England and the Middle Atlantic, as compared to 0.57 in Europe. Some reasons for this include the ease that the average American had in buying frontier land, which was abundant at the time, and an overall scarcity of labor in non-slaveholding areas, which forced landowners to pay higher wages. There were also relatively few poor people in America at the time, since only those with at least some money could afford to come to America. Inequality grew in the 19th century; between 1774 and 1860, the Gini coefficient grew from 0.441 to 0.529. In 1860, the top 1 percent collected almost one-third of property incomes, as compared to 13.7% in 1774. There was a great deal of competition for land in the cities and non-frontier areas during this time period, with those who had already acquired land becoming richer than everyone else. The newly burgeoning financial sector also greatly rewarded the already-wealthy, as they were the only ones financially sound enough to invest. Simon Kuznets, using income tax records and his research-based estimates, showed a reduction of about 10% in the movement of national income toward the top 10% of wealth-owners, a reduction from about 45–50% in 1913 to about 30–35% in 1948. This period spans both The Great Depression and World War II, events with significant economic consequences. This is called the Great Compression. Franklin D. Roosevelt's establishment of social programs under the New Deal and efforts towards wealth redistribution also reduced wealth inequality. How wealth is measured affects inequality trends. Some measures count only marketable assets (e.g., stocks, housing, retirement accounts), while others also add the present value of already-accrued Social Security benefits. In the United States, employees pay 6.2% of wages in Social Security (OASDI) payroll tax, matched by employers for a total of 12.4%; of this, 5.3% + 5.3% funds Old-Age and Survivors Insurance (OASI) and 0.9% + 0.9% funds Disability Insurance (DI). These contributions create non-tradeable claims to future benefits. A 2025 study in the Journal of Finance finds that when accrued Social Security benefits are included in household wealth, the top 1% share rises only from about 22% (1989) to just under 24% (2019), and the top 10% increases by roughly 1–2 percentage points; by contrast, on marketable-wealth measures that exclude Social Security, top shares rose by about 6–10 percentage points over the same period. After 2019, marketable-wealth data from the Federal Reserve’s Distributional Financial Accounts indicate further concentration: the top 1% held about 31.0% of total household net worth in 2025:Q2 (versus ~30.5% in 2019:Q4). Comparable estimates that include accrued Social Security wealth for the 2020s have not yet been published. Note: Social Security benefits are non-tradeable and cannot be sold or borrowed against; including them flattens measured wealth trends but does not reflect control over purchasing power, liquid assets or political power. The Federal Reserve publishes information on the distribution of household assets, debt, and equity (net worth) by quarter going back to 1989. The tables below summarize the net worth data, in real terms (adjusted for inflation), for 1989 to 2022, and 2016 to 2022. 
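As a concrete restatement of the payroll-tax split described above, the following sketch computes the employee, employer, OASI, and DI portions of the OASDI tax for a given annual wage. It applies only the statutory rates quoted in the text; the annual taxable wage base cap and any year-specific rule changes are deliberately ignored, and the function name is illustrative.

```python
# Minimal illustration of the OASDI payroll-tax split described above:
# 6.2% employee + 6.2% employer = 12.4% total, of which 5.3% + 5.3%
# funds OASI and 0.9% + 0.9% funds DI. The taxable wage base cap is ignored.

EMPLOYEE_RATE = 0.062
EMPLOYER_RATE = 0.062
OASI_PORTION = 0.053 / 0.062   # share of each 6.2% that goes to OASI
DI_PORTION = 0.009 / 0.062     # share of each 6.2% that goes to DI

def oasdi_contributions(annual_wage):
    """Return the nominal OASDI amounts generated by one year of wages."""
    employee = annual_wage * EMPLOYEE_RATE
    employer = annual_wage * EMPLOYER_RATE
    total = employee + employer
    return {
        "employee": round(employee, 2),
        "employer": round(employer, 2),
        "total": round(total, 2),
        "to_OASI": round(total * OASI_PORTION, 2),
        "to_DI": round(total * DI_PORTION, 2),
    }

if __name__ == "__main__":
    # A $50,000 wage yields 3,100 + 3,100 = 6,200 in OASDI tax,
    # of which 5,300 funds OASI and 900 funds DI.
    print(oasdi_contributions(50_000))
```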
Journalist Matthew Yglesias explained in June 2019 how the ownership of stock has driven wealth inequality, as the bottom 50% has minimal stock ownership: "...[T]he bottom half of the income distribution had a huge share of its wealth tied up in real estate while owning essentially no shares of corporate stock. The top 1 percent, by contrast, wasn't just rich — it was specifically rich in terms of owning companies, both stock in publicly traded ones ("corporate equities") and shares of closely held ones ("private businesses")...So the value of those specific assets — assets that people in the bottom half of the distribution never had a chance to own in the first place — soared." National Public Radio (NPR) reported in 2017 that the bottom 50% of U.S. households (by net worth) have little stock market exposure, whether directly or indirectly through 401(k) plans, writing: "That means the stock market rally can only directly benefit around half of all Americans — and substantially fewer than it would have a decade ago when nearly two-thirds of families owned stock." Some authors argue that these increases in wealth inequality are indicative of a Second Gilded Age in America. The table below shows changes from Q4 2016 (the end of the Obama Administration) to Q1 2022. Wealth and income There is an important distinction between income and wealth. Income refers to a flow of money over time, commonly in the form of a wage or salary; wealth is a collection of assets owned, minus liabilities. In essence, income is what people receive through work, retirement, or social welfare, whereas wealth is what people own. While the two are related, income inequality alone is insufficient for understanding economic inequality for two reasons: In 1998, Dennis Gilbert asserted that the standard of living of the working and middle classes is dependent primarily upon income and wages, while the rich tend to rely on wealth, distinguishing them from the vast majority of Americans. The United States Census Bureau formally defines income as money received on a regular basis (exclusive of certain money receipts such as capital gains) before payments on personal income taxes, social security, union dues, Medicare deductions, etc. By this official measure, the wealthiest families may have low income, but the value of their assets may be enough to support their lifestyle. Dividends from trusts or gains in the stock market do not fall under the aforementioned definition of income, but are commonly the primary source of capital for the ultra-wealthy. Retired people also have little income, but may have a high net worth because of money saved over time. Additionally, income does not capture the extent of wealth inequality. Wealth is most commonly accumulated over time, through the steady investment of income and the growth of assets. The income of one year does not typically encompass the accumulation of a lifetime. Income statistics cover too narrow a time span to be an adequate indicator of financial inequality. For example, the Gini coefficient for wealth inequality increased from 0.80 in 1983 to 0.84 in 1989. In the same year, 1989, the Gini coefficient for income was only 0.52. The Gini coefficient is an economic tool on a scale from 0 to 1 that measures the level of inequality: 1 signifies perfect inequality and 0 represents perfect equality. 
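To make the Gini coefficient just defined concrete, the following sketch computes it for two toy distributions using the standard closed-form formula for sorted, non-negative values. The data are illustrative only, not the 1983 or 1989 survey figures cited above.

```python
# Minimal sketch of a Gini coefficient computation (toy data, not the
# survey figures cited in the text). Uses the standard closed form for
# sorted, non-negative values: G = 2*sum(i*x_i)/(n*sum(x_i)) - (n+1)/n.

def gini(values):
    """Return the Gini coefficient: 0 is perfect equality, values near 1
    indicate extreme inequality."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

if __name__ == "__main__":
    equal = [100] * 10          # every household holds the same wealth
    skewed = [0] * 9 + [1000]   # one household holds everything
    print(round(gini(equal), 3))    # 0.0
    print(round(gini(skewed), 3))   # 0.9 (approaches 1 as n grows)
```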
From this data, it is evident that in 1989 there was a discrepancy in the level of economic disparity: the extent of wealth inequality was significantly higher than income inequality. Recent research shows that many households, in particular those headed by young parents (younger than 35), minorities, and individuals with low educational attainment, display very little accumulation. Many have no financial assets and their total net worth is also low.

According to the Congressional Budget Office, between 1979 and 2007, incomes of the top 1% of Americans grew by an average of 275%. (Note: The IRS insists that comparisons of adjusted gross income pre-1987 and post-1987 are complicated by large changes in the definition of AGI, which led to households within the top income quintile reporting more of their income on their individual income tax form's AGI, rather than reporting their business income in separate corporate tax returns, or not reporting certain non-taxable income in their AGI at all, such as municipal bond income. In addition, IRS studies consistently show that a majority of households in the top income quintile have moved to a lower quintile within one decade. There are even more changes to households in the top 1%. Without including those data here, a reader is likely to assume households in the top 1% are almost the same from year to year.)

In 2009, people in the top 1% of taxpayers made $343,927 or more. According to US economist Joseph Stiglitz, the richest 1% of Americans gained 93% of the additional income created in 2010. A study by Emmanuel Saez and Thomas Piketty showed that the top 10 percent of earners earned more than half of the country's total income in 2012, the highest level recorded since the government began collecting the relevant data a century ago.

People in the top one percent were three times more likely to work more than 50 hours a week, were more likely to be self-employed, and earned a fifth of their income as capital income. The top one percent was composed of many professions and had an annual turnover rate of more than 25%. The five most common professions were managers, physicians, administrators, lawyers, and teachers.

A 2022 study in PNAS found that earnings inequality in the United States did not increase over the preceding decade, marking the first reversal of rising earnings inequality since 1980. The reversal was due to a shrinking wage gap between low-wage workers and median-wage earners, which was in turn due to broadly rising pay in low-wage professions. At the same time, the gap between median-wage workers and top earners widened.

U.S. stock market ownership distribution

In March 2017, NPR summarized the distribution of U.S. stock market ownership (direct and indirect through mutual funds), which is highly concentrated among the wealthiest families. The Federal Reserve reported the median value of stock ownership by income group for 2016. NPR reported that when politicians reference the stock market as a measure of economic success, that success is not relevant to nearly half of Americans. Further, more than one-third of Americans who work full-time have no access to pensions or retirement accounts such as 401(k)s that derive their value from financial assets like stocks and bonds. The New York Times reported that the percentage of workers covered by generous defined-benefit pension plans declined from 62% in 1983 to 17% by 2016.
While some economists consider an increase in the stock market to have a "wealth effect" that increases economic growth, economists like former Dallas Federal Reserve Bank President Richard Fisher believe those effects are limited.

Causes of wealth inequality

Essentially, the wealthy possess greater financial opportunities that allow their money to make more money. Earnings from the stock market or mutual funds are reinvested to produce a larger return. Over time, the sum that is invested becomes progressively more substantial. Those who are not wealthy, however, do not have the resources to enhance their opportunities and improve their economic position. Rather, "after debt payments, poor families are constrained to spend the remaining income on items that will not produce wealth and will depreciate over time." Scholar David B. Grusky notes that "62 percent of households headed by single parents are without savings or other financial assets." Net indebtedness generally prevents the poor from having any opportunity to accumulate wealth and thereby better their conditions.

Economic inequality is also a result of differences in income. Factors that contribute to this gap in wages include level of education, labor market demand and supply, gender differences, growth in technology, and personal abilities. The quality and level of education that a person has often corresponds to their skill level, which in turn is reflected in their income. Wages are also determined by the "market price of a skill" at that current time. Although gender inequality is a separate social issue, it plays a role in economic inequality. According to the U.S. Census Report, in America the median full-time salary for women is 77 percent of that for men. Also contributing to wealth inequality in the U.S., both unskilled and skilled workers are being replaced by machinery. The Seven Pillars Institute for Global Finance and Ethics argues that because of this "technological advance", the income gap between workers and owners has widened.

Income inequality contributes to wealth inequality. For example, economist Emmanuel Saez wrote in June 2016 that the top 1% of families captured 52% of the total real income (GDP) growth per family from 2009 to 2015. From 2009 to 2012, the top 1% captured 91% of the income gains.

Nepotism perpetuates and increases wealth inequality. Wealthy families pass down their assets, allowing future generations to develop even more wealth. The poor, on the other hand, are less able to leave inheritances to their children, leaving the latter with little or no wealth on which to build. Wealthy parents often use their economic or political power to advantage their own children, such as by providing extra funding for education, excluding poor families from the local community or schools (usually through exclusionary zoning), using social connections to provide opportunities for advancement like internships, and allowing children to take entrepreneurial risks without risking homelessness or destitution.

Corresponding to financial resources, the wealthy strategically organize their money so that it will produce profit. Affluent people are more likely to allocate their money to financial assets such as stocks, bonds, and other investments which hold the possibility of capital appreciation. Those who are not wealthy are more likely to have their money in savings accounts and home ownership.
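The compounding mechanism described above (returns that are reinvested, so the invested sum grows progressively larger) can be made concrete with a short calculation. This is only a sketch: the 7% annual return, the dollar amount, and the 30-year horizon are assumptions chosen for illustration, not figures from the sources cited in this section.

```python
# Illustrative only: compares reinvested (compounding) returns with returns
# that are withdrawn each year. The 7% return and $10,000 stake are assumptions.

def grow(principal, annual_return, years, reinvest=True):
    balance = principal
    withdrawn = 0.0
    for _ in range(years):
        gain = balance * annual_return
        if reinvest:
            balance += gain       # gains stay invested and compound
        else:
            withdrawn += gain     # gains are spent; only the principal keeps earning
    return balance + withdrawn

print(round(grow(10_000, 0.07, 30, reinvest=True)))   # about 76,000: gains compound geometrically
print(round(grow(10_000, 0.07, 30, reinvest=False)))  # 31,000: gains accumulate only linearly
```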
This difference comprises the largest reason for the continuation of wealth inequality in America: the rich are accumulating more assets while the middle and working classes are just getting by. As of 2007, the richest 1% held about 38% of all privately held wealth in the United States, while the bottom 90% held 73.2% of all debt. According to The New York Times, the richest 1 percent in the United States now own more wealth than the bottom 90 percent.

However, other studies argue that a higher average savings rate will contribute to reducing the share of wealth owned by the rich. The reason is that the rich in wealth are not necessarily the individuals with the highest income. Therefore, the relative wealth share of the poorer quintiles of the population would increase if their savings rate out of income were very high, although the absolute gap with the wealthiest would still grow.

Economists and politicians such as Emmanuel Saez, Thomas Piketty, and Barack Obama have suggested that the nature of tax policy in America perpetuates economic inequality by steering large sums of wealth into the hands of the wealthiest Americans. The mechanism for this is that when the wealthy avoid paying taxes, wealth concentrates in their coffers and the poor go into debt.

The economist Joseph Stiglitz argues that "Strong unions have helped to reduce inequality, whereas weaker unions have made it easier for CEOs, sometimes working with market forces that they have helped shape, to increase it." The long fall in unionization in the U.S. since WWII has seen a corresponding rise in the inequality of wealth and income.

Some tax policies subsidize wealthy people more than poor people; critics often argue the home mortgage interest deduction should be abolished because it provides more tax relief for people in higher tax brackets and with more expensive homes, and because poorer people are more often renters and therefore less likely to be able to use this deduction at all. Regressive taxes include payroll taxes, sales taxes, and fuel taxes.

A 2022 study in the American Economic Journal found that greater economic inequality in the United States than in Europe was not because of the nature of tax and transfer systems in the United States. The study found that the U.S. redistributes a greater share of its wealth to the bottom half of the income distribution than any European country. It found instead that Europe had less economic inequality because it had been more successful at ensuring that the bottom half of the income distribution could get relatively well-paying jobs.

Racial disparities

The wealth gap between white and black families nearly tripled, from $85,000 in 1984 to $236,500 in 2009. A Brandeis University Institute on Assets and Social Policy paper cites the number of years of homeownership, household income, unemployment, education, and inheritance as leading causes for the growth of the gap, concluding homeownership to be the most important. Inheritance can directly link the disadvantaged economic position and prospects of today's blacks to the disadvantaged positions of their parents' and grandparents' generations, according to a report by Robert B. Avery and Michael S. Rendall, which pointed out that "one in three white households will receive a substantial inheritance during their lifetime compared to only one in ten black households."
In the journal Sociological Perspectives, Lisa Keister reports that family size and structure during childhood "are related to racial differences in adult wealth accumulation trajectories, allowing whites to begin accumulating high-yield assets earlier in life."

The article "America's Financial Divide" added context to racial wealth inequality, stating: "... nearly 96.1 percent of the 1.2 million households in the top one percent by income were white, a total of about 1,150,000 households. In addition, these families were found to have a median net asset worth of $8.3 million. In stark contrast, in the same piece, black households were shown as a mere 1.4 percent of the top one percent by income, that's only 16,800 homes. In addition, their median net asset worth was just $1.2 million. Using this data as an indicator, only several thousand of the over 14 million African American households have more than $1.2 million in net assets ..."

Relying on data from Credit Suisse and Brandeis University's Institute on Assets and Social Policy, the Harvard Business Review, in the article "How America's Wealthiest Black Families Invest Money", stated: "If you're white and have a net worth of about $356,000, that's good enough to put you in the 72nd percentile of white families. If you're black, it's good enough to catapult you into the 95th percentile." This means 28 percent of the total 83 million white homes, or over 23 million white households, have more than $356,000 in net assets, while only 700,000 of the 14 million black homes have more than $356,000 in total net worth.

According to Inequality.org, the median black family is only worth $1,700 when durables are deducted. In contrast, the median white family holds $116,800 of wealth using the same accounting methods. Today, using Wolff's analysis, the median African American family holds a mere 1.5 percent of median white American family wealth.

A recent piece on Eurweb/Electronic Urban Report, "Black Wealth Hardly Exists, Even When You Include NBA, NFL and Rap Stars", stated this about the difference between black middle-class families and white middle-class families: Going even further into the data, a recent study by the Institute for Policy Studies (IPS) and the Corporation For Economic Development (CFED) found that it would take 228 years for the average black family to amass the same level of wealth the average white family holds today in 2016. All while white families create even more wealth over those same two hundred years. In fact, this is a gap that will never close if America stays on its current economic path.

According to the Institute on Assets and Social Policy, for each dollar of increase in average income an African American household saw from 1984 to 2009, just $0.69 in additional wealth was generated, compared with the same dollar in increased income creating an additional $5.19 in wealth for a similarly situated white household.

In the American Prospect article "Black Wealth On TV: Realities Don't Match Perceptions", author Lilian Singh wrote on why the perceptions of black life created by media are misleading: Black programming features TV shows that collectively create false perceptions of wealth for African-American families. The images displayed are in stark contrast to the economic conditions the average black family is battling each day.

According to an article by the Pew Research Center, the median wealth of non-Hispanic black households fell nearly 38% from 2010 to 2013.
During that time, the median wealth of those households fell from $16,600 to $11,000. The median wealth of Hispanic families fell 14.3% as well, from $16,000 to $13,700. Although the median net worth of all households in the United States decreased over this period, as of 2013 white households had a median net worth of $141,900, while black households had a median net worth of just $11,000 and Hispanic households $13,700.

In 2023, the Federal Reserve Board published median and mean family wealth statistics for 2022, based on a nationwide survey of 4,602 families. White families had a median net worth of $285,000; Hispanic families, $61,600; and black families, $44,900. Although black families had the lowest median net worth of all racial groups, they experienced the greatest percentage increase in net worth from 2019 to 2022, at 60 percent. For the first time, the survey calculated net worth for Asian families separately (Asian families had previously been grouped into an "other" category along with Native American, Pacific Islander, and multiracial families). Asian families had the highest median net worth, at $536,000.

Effect on democracy

A 2014 study by researchers at Princeton and Northwestern concludes that government policies reflect the desires of the wealthy, and that the vast majority of American citizens have "minuscule, near-zero, statistically non-significant impact upon public policy. When a majority of citizens disagrees with economic elites and/or with organized interests, they generally lose." When Janet Yellen, the chair of the Federal Reserve, was questioned by Senator Bernie Sanders about the study at a congressional hearing in May 2014, she responded, "There's no question that we've had a trend toward growing inequality," and said that this trend "can shape [and] determine the ability of different groups to participate equally in a democracy and have grave effects on social stability over time."

In Capital in the Twenty-First Century, French economist Thomas Piketty argues that "extremely high levels" of wealth inequality are "incompatible with the meritocratic values and principles of social justice fundamental to modern democratic societies" and that "the risk of a drift towards oligarchy is real and gives little reason for optimism about where the United States is headed."

Proposals to reduce wealth inequality

There is a political debate over the estate tax in the United States, which reduces inequality by taxing the transfer of large estates at death. The Tax Cuts and Jobs Act of 2017 doubled the estate tax exemption, from $5.49 million in 2017 to $11.18 million in 2018; the increase was estimated to affect about 3,200 estates in 2018. A 2021 investigation using leaked IRS documents found that more than half of the richest 100 Americans use grantor retained annuity trusts to avoid paying estate taxes when they die. On top of the federal estate tax, 17 states have an estate or inheritance tax.

President Joe Biden's proposed budget for 2023 contains two tax changes for households with wealth above $100 million. The first is a new "minimum tax" at death on unrealized capital gains above $1 million. The second is to tax realized capital gains as ordinary income, which is expected to effectively raise the rate on capital gains from 23.8% to 43.4%.
Combined, these tax changes are estimated to place such households at an effective tax rate of 61.1%, nearly double their effective tax rate in 2022.

Senator Bernie Sanders pitched the idea of a wealth tax in the US in 2014. Later, in January 2019, Senator Elizabeth Warren proposed an annual tax on wealth, specifically a 2% tax on wealth over $50 million and an additional 1% surcharge on wealth over $1 billion. Wealth is defined as including all asset classes, including financial assets and real estate. In 2021, officials in the state of Washington considered proposals to tax wealthy residents within the state.

Warren's plan received both praise and criticism. Economist Paul Krugman wrote in January 2019 that polls indicate the idea of taxing the rich more is very popular. Two billionaires, Michael Bloomberg and Howard Schultz, criticized the proposal as "unconstitutional" and "ridiculous," respectively. Economists Emmanuel Saez and Gabriel Zucman analyzed Warren's proposal and estimated that about 75,000 households (less than 0.1%) would pay the tax. The tax was expected to raise around $2.75 trillion over 10 years, roughly 1% of GDP per year on average. It was expected to raise the total tax burden for those subject to the wealth tax from 3.2% of their wealth under current law to about 4.3% on average, versus 7.2% for families in the bottom 99%. For scale, the federal budget deficit in 2018 was 3.9% of GDP and was expected to rise towards 5% of GDP over the next decade.

An analysis by the think tank Tax Foundation found that Warren's proposal would reduce long-term GDP by 0.37% and raise $2.2 trillion over a period of ten years, after factoring in macroeconomic feedback effects. It expected the tax to "face serious administrative and compliance challenges due to valuation difficulties and tax evasion and avoidance issues." It also expected foreign investors to replace American billionaires as the owners of capital.

In January 2019, Senators Charles Schumer and Bernie Sanders advocated limiting stock buybacks to reduce income and wealth inequality.
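As a concrete reading of the bracket structure in Warren's proposal described above (2% on net wealth above $50 million plus a 1% surcharge above $1 billion), the sketch below computes the annual liability for a given net wealth. It is illustrative only and ignores the valuation, avoidance, and administrative issues raised in the Tax Foundation analysis; the function name and sample inputs are invented.

```python
# Illustrative only: applies the thresholds stated in the text for the 2019
# Warren wealth-tax proposal. This is a simplification, not a scoring model.

def warren_wealth_tax(net_wealth):
    """Annual tax: 2% of wealth above $50M, plus a 1% surcharge above $1B."""
    tax = 0.0
    if net_wealth > 50_000_000:
        tax += 0.02 * (net_wealth - 50_000_000)
    if net_wealth > 1_000_000_000:
        tax += 0.01 * (net_wealth - 1_000_000_000)  # surcharge bracket
    return tax

print(warren_wealth_tax(60_000_000))     # 200,000.0  (2% of the $10M above the threshold)
print(warren_wealth_tax(2_000_000_000))  # 49,000,000.0  (2% of $1.95B plus 1% of $1B)
```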
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEAttardo200127-88] | [TOKENS: 8460]
Joke

A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry.

It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline.

Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes.

Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1]

History in print

Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.

Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh?
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. It is a comic triple from Adab, dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership of a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature.

Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase "tertia deducta" can be translated as "with one-third off (in price)" or "with Tertia putting out."

The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek and dating to the fourth or fifth century AD. The author of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch".

During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of it documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe.

The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded.

There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons.

Telling jokes

Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree, in one form or another, to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking.

Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.

"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, told with no substantiating details, placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and that the story which follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny.

Following its linguistic framing, the joke, in the form of a story, can be told. It is not required to be verbatim text, unlike other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline.

The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter.

This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin. Their article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time such jokes require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system.
This study adds credence to the common experience when exposed to an off-colour joke: a laugh is followed in the next breath by a disclaimer, "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content of the joke.

The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings; the punchline remains the same, but it is more or less appropriate depending on the current context.

The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to make it acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, a single joke can take on infinite shades of meaning for each unique social setting.

The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better: what makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships

The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship.

Electronic

The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and is for the most part solitary. While the text of a joke is preserved, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially.

Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences.

A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation.

Joke cycles

A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Numerous joke cycles have circulated in the recent past.

As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns".

The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it."

A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by the widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation undertaken previously by folklorists and ethnologists.
Classification systems

As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke.

A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, while at the same time it makes it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index.

Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices.

The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour, or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other and then combined into a concatenated classification label. The six KRs of the joke structure are the script opposition (SO), the logical mechanism (LM), the situation (SI), the target (TA), the narrative strategy (NS), and the language (LA).

As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour.

Joke and humour research

Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6]

Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion.

Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory, developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased.

A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny.

In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions.

The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools.

"The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny.

Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7]

Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs can function as a multi-dimensional descriptive label for any piece of humorous text.

Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words.

Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes.

Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now?
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
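To make the template-driven "toy systems" described above more concrete, here is a minimal sketch of how such a punning program can be structured: a fixed riddle template is filled from a small table of pre-stored setups and punchlines. The jokes, names and layout below are invented for illustration and are not taken from any published generator; research systems work on the same principle but with far larger lexicons and generation rules.

```c
/* A minimal sketch of a template-driven "toy" pun program: a fixed
 * riddle template is filled from a small, pre-defined option table.
 * All entries are invented for illustration. */
#include <stdio.h>

struct pun_entry {
    const char *topic;      /* the thing the riddle asks about */
    const char *punchline;  /* the pre-stored punning answer   */
};

static const struct pun_entry puns[] = {
    { "a fish with no eyes",              "a fsh"         },
    { "a boomerang that won't come back", "a stick"       },
    { "cheese that isn't yours",          "nacho cheese"  },
};

int main(void)
{
    /* The program has no model of meaning: it only instantiates the
     * template "What do you call X? Y." from its finite option table. */
    for (size_t i = 0; i < sizeof puns / sizeof puns[0]; i++)
        printf("What do you call %s? %s.\n", puns[i].topic, puns[i].punchline);
    return 0;
}
```

Such a program contains no semantic scripts at all, which is exactly the limitation that the SSTH/GTVH discussion above points to.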
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-33] | [TOKENS: 10628]
Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; while working on it, in 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y - z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
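The control-unit cycle just described, together with the picture of memory as a list of numbered cells, can be illustrated with a toy simulator. The sketch below is a hedged illustration only: the opcode names, the two-cell instruction format and the single accumulator register are invented for this example and do not correspond to any real CPU's instruction set.

```c
/* A toy illustration of the fetch-decode-execute cycle described above.
 * Memory is a list of numbered cells; the program counter says which cell
 * holds the next instruction; a jump simply overwrites the program counter.
 * The instruction set and encoding are invented for illustration. */
#include <stdio.h>

enum { HALT, LOAD, ADD, STORE, JUMP_IF_ZERO };

int main(void)
{
    int mem[32] = {
        /* cells 0..9: the program, one instruction = opcode, operand     */
        LOAD, 20,          /* acc = mem[20]                               */
        ADD, 21,           /* acc = acc + mem[21]                         */
        STORE, 22,         /* mem[22] = acc                               */
        JUMP_IF_ZERO, 0,   /* if acc == 0, set pc = 0 (not taken here)    */
        HALT, 0,
    };
    mem[20] = 2; mem[21] = 3;   /* data lives in the same memory as code  */

    int pc = 0, acc = 0, running = 1;
    while (running) {
        int op = mem[pc], arg = mem[pc + 1];       /* fetch               */
        pc += 2;                                   /* step to next cell   */
        switch (op) {                              /* decode and execute  */
        case LOAD:  acc = mem[arg];             break;
        case ADD:   acc += mem[arg];            break; /* ALU operation   */
        case STORE: mem[arg] = acc;             break;
        case JUMP_IF_ZERO: if (acc == 0) pc = arg; break; /* control flow */
        case HALT:  running = 0;                break;
        }
    }
    printf("mem[22] = %d\n", mem[22]);   /* prints 5 */
    return 0;
}
```

Note how the jump works simply by overwriting the program counter, and how program and data share the same numbered cells, which is the stored-program idea discussed further below.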
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (28 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
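As a rough sketch of the time-slicing scheme described a few paragraphs above, the following simulation hands out one "slice" of work per turn to each of a few imaginary programs and skips a program while it is waiting for input/output. It is a single-threaded toy model, not a real operating-system scheduler; the task names and numbers are invented for illustration.

```c
/* A toy round-robin time-slicing simulation: each program gets a slice of
 * work in turn, and a program waiting for slow I/O gives up its slice.
 * Task names and work counts are invented for illustration. */
#include <stdio.h>

struct task {
    const char *name;
    int work_left;      /* units of computation still to do              */
    int waiting_for_io; /* 1 = blocked on slow input/output, skip slice  */
};

int main(void)
{
    struct task tasks[] = {
        { "editor",   3, 0 },
        { "printer",  2, 1 },   /* starts out waiting for its device */
        { "compiler", 4, 0 },
    };
    const int n = sizeof tasks / sizeof tasks[0];

    int remaining = 3 + 2 + 4;
    for (int slice = 0; remaining > 0; slice++) {
        struct task *t = &tasks[slice % n];        /* round robin          */
        if (t->waiting_for_io) {
            t->waiting_for_io = 0;                 /* pretend I/O finished */
            continue;                              /* slice goes to others */
        }
        if (t->work_left > 0) {
            t->work_left--;                        /* run for one slice    */
            remaining--;
            printf("slice %2d: ran %s (%d left)\n", slice, t->name, t->work_left);
        }
    }
    return 0;
}
```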
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
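A short program for this task might look like the listing below. This is a hedged reconstruction rather than a verbatim listing from the source: the register choices ($8 and $9 for the running sum and the counter, $10 for the comparison flag, $2 for the result) and the label names are illustrative.

```asm
        addi $8, $0, 0          # sum      = 0
        addi $9, $0, 1          # counter  = 1
loop:   slti $10, $9, 1001      # flag = 1 while counter <= 1000
        beq  $10, $0, done      # leave the loop once the flag is 0
        add  $8, $8, $9         # sum = sum + counter
        addi $9, $9, 1          # counter = counter + 1
        j    loop               # jump back and repeat the addition
done:   add  $2, $8, $0         # copy the final sum into register $2
```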
This example is written in the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
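For contrast with the assembly-level listing above, the same computation can be written in a high-level language; C is used here purely for illustration and is not singled out by the source. The point is not the particular language but that the source text describes the problem rather than one processor's instructions.

```c
/* The same computation as the assembly sketch above, written in a
 * high-level language.  A compiler translates this source into the
 * machine language of whatever CPU the program is built for. */
#include <stdio.h>

int main(void)
{
    int sum = 0;
    for (int i = 1; i <= 1000; i++)   /* the loop: flow of control */
        sum += i;                     /* repeated addition         */
    printf("%d\n", sum);              /* prints 500500             */
    return 0;
}
```

Because a compiler does the translation, the same high-level source can be built for many different processors, whereas a machine-language or assembly-language listing is tied to one architecture.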
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
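To make the phrase "parameters that are adjusted throughout training" from the paragraph above slightly more concrete, the following sketch fits a one-parameter model y = w * x to a few example pairs by gradient descent. The data, learning rate and step count are invented for illustration; real machine learning systems differ enormously in scale but follow the same adjust-the-parameters-to-fit-the-data idea.

```c
/* A bare-bones illustration of parameter adjustment during training:
 * a one-parameter model y = w * x is fitted to a few example pairs by
 * gradient descent on the mean squared error.  All numbers are invented
 * for illustration. */
#include <stdio.h>

int main(void)
{
    const double xs[] = { 1.0, 2.0, 3.0, 4.0 };
    const double ys[] = { 2.1, 3.9, 6.2, 7.8 };   /* roughly y = 2x */
    const int n = 4;

    double w = 0.0;                    /* the single trainable parameter */
    for (int step = 0; step < 200; step++) {
        double grad = 0.0;
        for (int i = 0; i < n; i++)    /* gradient of mean squared error */
            grad += 2.0 * (w * xs[i] - ys[i]) * xs[i] / n;
        w -= 0.01 * grad;              /* adjust the parameter slightly  */
    }
    printf("learned w = %f\n", w);     /* ends up close to 2.0 */
    return 0;
}
```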
========================================