[SOURCE: https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_area] | [TOKENS: 270]
List of countries and dependencies by area
This is a list of the world's countries and their dependencies, ranked by total area, including land and water. This list includes entries that are not limited to those in the ISO 3166-1 standard, which covers sovereign states and dependent territories. All 193 member states of the United Nations plus the two observer states are given a rank number. Largely unrecognised states not in ISO 3166-1 are included in the list in ranked order. The areas of such largely unrecognised states are in most cases also included in the areas of the more widely recognised states that claim the same territory; see the notes in the "Notes" column for each country for clarification. Not included in the list are individual country claims to parts of the continent of Antarctica or entities such as the European Union[a] that have some degree of sovereignty but do not consider themselves to be sovereign countries or dependent territories. This list includes three measurements of area: Total area is taken from the United Nations Statistics Division unless otherwise noted. Land and water are taken from the Food and Agriculture Organization unless otherwise noted. The CIA World Factbook is most often used when different UN departments disagree. Other sources and details for each entry may be specified in the relevant footnote.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Roman_period] | [TOKENS: 20077] |
Roman Empire
During the classical period, the Roman Empire controlled the Mediterranean and much of Europe, Western Asia, and North Africa. The Romans conquered most of these territories in the time of the Republic, and they were thereafter ruled by emperors following Octavian's assumption of power in 27 BC. Over the 4th century AD, the empire split into western and eastern halves. The Western Empire collapsed in 476 AD, while the Eastern Empire endured until the fall of Constantinople in 1453. By 100 BC, the city of Rome had expanded its rule from the Italian peninsula to most of the Mediterranean and beyond. However, it was severely destabilised by civil wars and political conflicts, which culminated in the victory of Octavian over Mark Antony and Cleopatra at the Battle of Actium in 31 BC, and the subsequent conquest of the Ptolemaic Kingdom in Egypt. In 27 BC, the Roman Senate granted Octavian overarching military power (imperium) and the new title of Augustus, marking his accession as the first Roman emperor. The vast Roman territories were organized into senatorial provinces, governed by proconsuls who were appointed by lot annually, and imperial provinces, which belonged to the emperor but were governed by legates. The first two centuries of the Empire saw a period of unprecedented stability and prosperity known as the Pax Romana (lit. 'Roman Peace'). Rome reached its greatest territorial extent under Trajan (r. 98–117 AD), but a period of increasing trouble and decline began under Commodus (r. 180–192). In the 3rd century, the Empire underwent a 49-year crisis that threatened its existence due to civil war, plagues and barbarian invasions. The Gallic and Palmyrene empires broke away from the state and a series of short-lived emperors led the Empire, which was later reunified under Aurelian (r. 270–275). The civil wars ended with the victory of Diocletian (r. 284–305), who set up two different imperial courts in the Greek East and Latin West. Constantine the Great (r. 306–337), the first Christian emperor, moved the imperial seat from Rome to Byzantium in 330, and renamed it Constantinople. The Migration Period, involving large invasions by Germanic peoples and by the Huns of Attila, led to the decline of the Western Roman Empire. With the fall of Ravenna to the Germanic Herulians and the deposition of Romulus Augustus in 476 by Odoacer, the Western Empire finally collapsed. The Byzantine (Eastern Roman) Empire survived for another millennium with Constantinople as its sole capital, until the city's fall in 1453.[f] Due to the Empire's extent and endurance, its institutions and culture had a lasting influence on the development of language, religion, art, architecture, literature, philosophy, law, and forms of government across its territories. Latin evolved into the Romance languages while Medieval Greek became the language of the East. The Empire's adoption of Christianity resulted in the formation of medieval Christendom. Roman and Greek art had a profound impact on the Italian Renaissance. Rome's architectural tradition served as the basis for Romanesque, Renaissance, and Neoclassical architecture, and also influenced Islamic architecture. The rediscovery of classical science and technology (which formed the basis for Islamic science) in medieval Europe contributed to the Scientific Renaissance and Scientific Revolution. Many modern legal systems, such as the Napoleonic Code, descend from Roman law. 
Rome's republican institutions have influenced the Italian city-state republics of the medieval period, the early United States, and modern democratic republics. History Rome had begun expanding shortly after the founding of the Roman Republic in the 6th century BC, though not outside the Italian Peninsula until the 3rd century BC. The Republic was not a nation-state in the modern sense, but a network of self-ruled towns (with varying degrees of independence from the Senate) and provinces administered by military commanders. It was governed by annually elected magistrates (Roman consuls above all) in conjunction with the Senate. The 1st century BC was a time of political and military upheaval, which ultimately led to rule by emperors. The consuls' military power rested in the Roman legal concept of imperium, meaning "command" (typically in a military sense). Occasionally, successful consuls or generals were given the honorary title imperator (commander); this is the origin of the word emperor, since this title was always bestowed to the early emperors.[g] Rome suffered a long series of internal conflicts, conspiracies, and civil wars from the late second century BC (see Crisis of the Roman Republic) while greatly extending its power beyond Italy. In 44 BC Julius Caesar was briefly perpetual dictator before being assassinated by a faction that opposed his concentration of power. This faction was driven from Rome and defeated at the Battle of Philippi in 42 BC by Mark Antony and Caesar's adopted son Octavian. Antony and Octavian divided the Roman world between them, but this did not last long. Octavian's forces defeated those of Mark Antony and Cleopatra at the Battle of Actium in 31 BC. In 27 BC the Senate gave him the title Augustus ("venerated") and made him princeps ("foremost") with proconsular imperium, thus beginning the Principate, the first epoch of Roman imperial history. Although the republic stood in name, Augustus had all meaningful authority. During his 40-year rule, a new constitutional order emerged so that, upon his death, Tiberius would succeed him as the new de facto monarch. The 200 years that began with Augustus's rule are traditionally regarded as the Pax Romana ("Roman Peace"). The cohesion of the empire was furthered by a degree of social stability and economic prosperity that Rome had never before experienced. Uprisings in the provinces were infrequent and put down "mercilessly and swiftly". The success of Augustus in establishing principles of dynastic succession was limited by his outliving a number of talented potential heirs. The Julio-Claudian dynasty lasted for four more emperors—Tiberius, Caligula, Claudius, and Nero—before it yielded in 69 AD to the strife-torn Year of the Four Emperors, from which Vespasian emerged as the victor. Vespasian became the founder of the brief Flavian dynasty, followed by the Nerva–Antonine dynasty which produced the "Five Good Emperors": Nerva, Trajan, Hadrian, Antoninus Pius, and Marcus Aurelius. Among the so-called "Five Good Emperors", Hadrian (r. 117–138) is particularly noted for consolidating the empire's frontiers and embarking on ambitious building projects throughout the provinces. In Judaea, which had long been the center of Jewish national and religious life, his reign marked a decisive turning point. After earlier Jewish resistance to Roman rule, Hadrian visited the region in 129/130 AD and refounded Jerusalem as the Roman colony Aelia Capitolina, naming it after his family (Aelius) and the Capitoline Triad. 
The refoundation overlaid the destroyed Jewish city with a new Roman urban plan, and included the construction of a Temple to Jupiter on the site of the former Jewish Temple. Later tradition and archaeological evidence also indicate a Temple of Venus near the site of the Holy Sepulchre. Hadrian's measures, combined with restrictions on Jewish practices, helped spark the Bar Kokhba Revolt (132–135 AD). After crushing the uprising, Roman forces expelled most Jews from Jerusalem, barring their entry except on certain days, and rebuilt the city as a statement of imperial power and domination. Most scholars consider Hadrianic Aelia to have been unwalled, with free-standing gate complexes (such as the northern gate beneath today's Damascus Gate) rather than a continuous defensive circuit. In the view of contemporary Greek historian Cassius Dio, the accession of Commodus in 180 marked the descent "from a kingdom of gold to one of rust and iron", a comment which has led some historians, notably Edward Gibbon, to take Commodus' reign as the beginning of the Empire's decline. In 212, during the reign of Caracalla, Roman citizenship was granted to all freeborn inhabitants of the empire. The Severan dynasty was tumultuous; an emperor's reign was ended routinely by his murder or execution and, following its collapse, the Empire was engulfed by the Crisis of the Third Century, a period of invasions, civil strife, economic disorder, and plague. In defining historical epochs, this crisis sometimes marks the transition from classical to late antiquity. Aurelian (r. 270–275) stabilised the empire militarily and Diocletian reorganised and restored much of it in 285. Diocletian's reign brought the empire's most concerted effort against the perceived threat of Christianity, the "Great Persecution". Diocletian divided the empire into four regions, each ruled by a separate tetrarch. Confident that he fixed the disorder plaguing Rome, he abdicated along with his co-emperor, but the Tetrarchy collapsed shortly after. Order was eventually restored by Constantine the Great, who became the first emperor to convert to Christianity, and who established Constantinople as the new capital of the Eastern Empire. During the decades of the Constantinian and Valentinian dynasties, the empire was divided along an east–west axis, with dual power centres in Constantinople and Rome. Julian, who under the influence of his adviser Mardonius attempted to restore Classical Roman and Hellenistic religion, only briefly interrupted the succession of Christian emperors. Theodosius I, the last emperor to rule over both East and West, died in 395 after making Christianity the state religion. The Western Roman Empire began to disintegrate in the early 5th century. The Romans fought off all invaders, most famously Attila, but the empire had assimilated so many Germanic peoples of dubious loyalty to Rome that the empire started to dismember itself. Most chronologies place the end of the Western Roman Empire in 476, when Romulus Augustulus was forced to abdicate to the Germanic warlord Odoacer. Odoacer ended the Western Empire by declaring Zeno sole emperor and placing himself as Zeno's nominal subordinate. In reality, Italy was ruled by Odoacer alone. The Eastern Roman Empire, called the Byzantine Empire by later historians, continued until the reign of Constantine XI Palaiologos, the last Roman emperor. He died in battle in 1453 against Mehmed II and his Ottoman forces during the siege of Constantinople. 
Mehmed II adopted the title of caesar in an attempt to claim a connection to the former Empire. His claim was soon recognized by the Patriarchate of Constantinople, but not by European monarchs. Geography and demography The Roman Empire was one of the largest in history, with contiguous territories throughout Europe, North Africa, and the Middle East. The Latin phrase imperium sine fine ("empire without end") expressed the ideology that neither time nor space limited the Empire. In Virgil's Aeneid, limitless empire is said to be granted to the Romans by Jupiter. This claim of universal dominion was renewed when the Empire came under Christian rule in the 4th century.[h] In addition to annexing large regions, the Romans directly altered their geography, for example cutting down entire forests. Roman expansion was mostly accomplished under the Republic, though parts of northern Europe were conquered in the 1st century, when Roman control in Europe, Africa, and Asia was strengthened. Under Augustus, a "global map of the known world" was displayed for the first time in public at Rome, coinciding with the creation of the most comprehensive political geography that survives from antiquity, the Geography of Strabo. When Augustus died, the account of his achievements (Res Gestae) prominently featured the geographical cataloguing of the Empire. Geography, alongside meticulous written records, was a central concern of Roman Imperial administration. The Empire reached its largest expanse under Trajan (r. 98–117), encompassing 5 million km². The traditional population estimate of 55–60 million inhabitants accounted for between one-sixth and one-fourth of the world's total population and made it the most populous unified political entity in the West until the mid-19th century. 21st-century demographic studies have argued for a population peak from 70 million to more than 100 million. Each of the three largest cities in the Empire—Rome, Alexandria, and Antioch—was almost twice the size of any European city at the beginning of the 17th century. As the historian Christopher Kelly described it: Then the empire stretched from Hadrian's Wall in drizzle-soaked northern England to the sun-baked banks of the Euphrates in Syria; from the great Rhine–Danube river system, which snaked across the fertile, flat lands of Europe from the Low Countries to the Black Sea, to the rich plains of the North African coast and the luxuriant gash of the Nile Valley in Egypt. The empire completely circled the Mediterranean ... referred to by its conquerors as mare nostrum—'our sea'. Trajan's successor Hadrian adopted a policy of maintaining rather than expanding the empire. Borders (fines) were marked, and the frontiers (limites) patrolled. The most heavily fortified borders were the most unstable. Hadrian's Wall, which separated the Roman world from what was perceived as an ever-present barbarian threat, is the primary surviving monument of this effort. In the eastern provinces, rural administration often relied on inscribed boundary stones to demarcate land and regulate taxation. Languages Latin and Greek were the main languages of the Empire,[i] but the Empire was deliberately multilingual. Andrew Wallace-Hadrill says "The main desire of the Roman government was to make itself understood". At the start of the Empire, knowledge of Greek was useful to pass as educated nobility and knowledge of Latin was useful for a career in the military, government, or law. 
Bilingual inscriptions indicate the everyday interpenetration of the two languages. The mutual linguistic and cultural influence of Latin and Greek is a complex topic. Latin words incorporated into Greek were very common by the early imperial era, especially in military, administrative, and commercial matters. Greek grammar, literature, poetry and philosophy shaped Latin language and culture. There was never a legal requirement for Latin in the Empire, but it represented a certain status. High standards of Latin, Latinitas, started with the advent of Latin literature. Due to the flexible language policy of the Empire, a natural competition among languages emerged, which spurred Latinitas as a defence of Latin against the stronger cultural influence of Greek. Over time, Latin came to be used to project power and higher social status. Most of the emperors were bilingual but had a preference for Latin in the public sphere for political reasons, a "rule" that originated during the Punic Wars. Emperors up to Justinian attempted at times to require the use of Latin in various branches of the administration, but there is no evidence that linguistic imperialism existed during the early Empire. After all freeborn inhabitants were universally enfranchised in 212, many Roman citizens lacked a knowledge of Latin. The wide use of Koine Greek enabled the spread of Christianity and reflects its role as the lingua franca of the Mediterranean during the time of the Empire. Following Diocletian's reforms in the 3rd century AD, there was a decline in the knowledge of Greek in the west. Spoken Latin later fragmented into the incipient Romance languages in the 7th century AD following the collapse of the Empire's west. The dominance of Latin and Greek among the literate elite obscures the continuity of other spoken languages within the Empire. Latin, referred to in its spoken form as Vulgar Latin, gradually replaced Celtic and Italic languages. References to interpreters indicate the continuing use of local languages, particularly in Egypt with Coptic, and in military settings along the Rhine and Danube. Roman jurists also show a concern for local languages such as Punic, Gaulish, and Aramaic in assuring the correct understanding of laws and oaths. In Africa, Libyco-Berber and Punic were used in inscriptions into the 2nd century. In Syria, Palmyrene soldiers used their dialect of Aramaic for inscriptions, an exception to the rule that Latin was the language of the military. The last reference to spoken Gaulish dates to between 560 and 575. The emergent Gallo-Romance languages would then be shaped by Gaulish. Proto-Basque, or Aquitanian, evolved with Latin loanwords into modern Basque. The Thracian language, like several now-extinct languages of Anatolia, is attested in Imperial-era inscriptions. Society The Empire was multicultural, with "astonishing cohesive capacity" to create shared identity while encompassing diverse peoples. Public monuments and communal spaces open to all—such as forums, amphitheatres, racetracks and baths—helped foster a sense of "Romanness". Roman society had multiple, overlapping social hierarchies. The civil war preceding Augustus caused upheaval, but did not effect an immediate redistribution of wealth and social power. From the perspective of the lower classes, a peak was merely added to the social pyramid. Personal relationships—patronage, friendship (amicitia), family, marriage—continued to influence politics. 
By the time of Nero, however, it was not unusual to find a former slave who was richer than a freeborn citizen, or an equestrian who exercised greater power than a senator. The blurring of the Republic's more rigid hierarchies led to increased social mobility, both upward and downward, to a greater extent than all other well-documented ancient societies. Women, freedmen, and slaves had opportunities to profit and exercise influence in ways previously less available to them. Social life, particularly for those whose personal resources were limited, was further fostered by a proliferation of voluntary associations and confraternities (collegia and sodalitates): professional and trade guilds, veterans' groups, religious sodalities, drinking and dining clubs, performing troupes, and burial societies. According to the jurist Gaius, the essential distinction in the Roman "law of persons" was that all humans were either free (liberi) or slaves (servi). The legal status of free persons was further defined by their citizenship. Most citizens held limited rights (such as the ius Latinum, "Latin right"), but were entitled to legal protections and privileges not enjoyed by non-citizens. Free people not considered citizens, but living within the Roman world, were peregrini, non-Romans. In 212, the Constitutio Antoniniana extended citizenship to all freeborn inhabitants of the empire. This legal egalitarianism required a far-reaching revision of existing laws that distinguished between citizens and non-citizens. Freeborn Roman women were considered citizens, but did not vote, hold political office, or serve in the military. A mother's citizen status determined that of her children, as indicated by the phrase ex duobus civibus Romanis natos ("children born of two Roman citizens").[j] A Roman woman kept her own family name (nomen) for life. Children most often took the father's name, with some exceptions. Women could own property, enter contracts, and engage in business. Inscriptions throughout the Empire honour women as benefactors in funding public works, an indication they could hold considerable fortunes. The archaic manus marriage in which the woman was subject to her husband's authority was largely abandoned by the Imperial era, and a married woman retained ownership of any property she brought into the marriage. Technically she remained under her father's legal authority, even though she moved into her husband's home, but when her father died she became legally emancipated. This arrangement was a factor in the degree of independence Roman women enjoyed compared to many other cultures up to the modern period: although she had to answer to her father in legal matters, she was free of his direct scrutiny in daily life, and her husband had no legal power over her. Although it was a point of pride to be a "one-man woman" (univira) who had married only once, there was little stigma attached to divorce, nor to speedy remarriage after being widowed or divorced. Girls had equal inheritance rights with boys if their father died without leaving a will. A mother's right to own and dispose of property, including setting the terms of her will, gave her enormous influence over her sons into adulthood. As part of the Augustan programme to restore traditional morality and social order, moral legislation attempted to regulate conduct as a means of promoting "family values". 
Adultery was criminalized, and defined broadly as an illicit sex act (stuprum) between a male citizen and a married woman, or between a married woman and any man other than her husband. That is, a double standard was in place: a married woman could have sex only with her husband, but a married man did not commit adultery if he had sex with a prostitute or person of marginalized status. Childbearing was encouraged: a woman who had given birth to three children was granted symbolic honours and greater legal freedom (the ius trium liberorum). At the time of Augustus, as many as 35% of the people in Roman Italy were slaves, making Rome one of five historical "slave societies" in which slaves constituted at least a fifth of the population and played a major role in the economy.[k] In urban settings, slaves might be professionals such as teachers, physicians, chefs, and accountants; the majority of slaves provided trained or unskilled labour. Agriculture and industry, such as milling and mining, relied on the exploitation of slaves. Outside Italy, slaves were on average an estimated 10 to 20% of the population, sparse in Roman Egypt but more concentrated in some Greek areas. Expanding Roman ownership of arable land and industries affected preexisting practices of slavery in the provinces. Although slavery has often been regarded as waning in the 3rd and 4th centuries, it remained an integral part of Roman society until gradually ceasing in the 6th and 7th centuries with the disintegration of the complex Imperial economy. Laws pertaining to slavery were "extremely intricate". Slaves were considered property and had no legal personhood. They could be subjected to forms of corporal punishment not normally exercised on citizens, sexual exploitation, torture, and summary execution. A slave could not as a matter of law be raped; a slave's rapist had to be prosecuted by the owner for property damage under the Aquilian Law. Slaves had no right to the form of legal marriage called conubium, but their unions were sometimes recognized. Technically, a slave could not own property, but a slave who conducted business might be given access to an individual fund (peculium) that he could use, depending on the degree of trust and co-operation between owner and slave. Within a household or workplace, a hierarchy of slaves might exist, with one slave acting as the master of others. Talented slaves might accumulate a large enough peculium to justify their freedom, or be manumitted for services rendered. Manumission had become frequent enough that in 2 BC a law (Lex Fufia Caninia) limited the number of slaves an owner was allowed to free in his will. Following the Servile Wars of the Republic, legislation under Augustus and his successors shows a driving concern for controlling the threat of rebellions through limiting the size of work groups, and for hunting down fugitive slaves. Over time slaves gained increased legal protection, including the right to file complaints against their masters. A bill of sale might contain a clause stipulating that the slave could not be employed for prostitution, as prostitutes in ancient Rome were often slaves. The burgeoning trade in eunuchs in the late 1st century prompted legislation that prohibited the castration of a slave against his will "for lust or gain". Roman slavery was not based on race. 
Generally, slaves in Italy were indigenous Italians, with a minority of foreigners (including both slaves and freedmen) estimated at 5% of the total in the capital at its peak, where their number was largest. Foreign slaves had higher mortality and lower birth rates than natives and were sometimes even subjected to mass expulsions. The average recorded age at death for the slaves of the city of Rome was seventeen and a half years (17.2 for males; 17.9 for females). During the period of republican expansionism when slavery had become pervasive, war captives were a main source of slaves. The range of ethnicities among slaves to some extent reflected that of the armies Rome defeated in war, and the conquest of Greece brought a number of highly skilled and educated slaves. Slaves were also traded in markets and sometimes sold by pirates. Infant abandonment and self-enslavement among the poor were other sources. Vernae, by contrast, were "homegrown" slaves born to female slaves within the household, estate or farm. Although they had no special legal status, an owner who mistreated or failed to care for his vernae faced social disapproval, as they were considered part of the family household and in some cases might actually be the children of free males in the family. Rome differed from Greek city-states in allowing freed slaves to become citizens; any future children of a freedman were born free, with full rights of citizenship. After manumission, a slave who had belonged to a Roman citizen enjoyed active political freedom (libertas), including the right to vote. His former master became his patron (patronus): the two continued to have customary and legal obligations to each other. During the early Empire, freedmen held key positions in the government bureaucracy, so much so that Hadrian limited their participation by law. The rise of successful freedmen—through political influence or wealth—is a characteristic of early Imperial society. The prosperity of a high-achieving group of freedmen is attested by inscriptions throughout the Empire. The Latin word ordo (plural ordines) is translated variously and inexactly into English as "class, order, rank". One purpose of the Roman census was to determine the ordo to which an individual belonged. Two of the highest ordines in Rome were the senatorial and equestrian. Outside Rome, cities or colonies were led by decurions, also known as curiales. "Senator" was not itself an elected office in ancient Rome; an individual gained admission to the Senate after he had been elected to and served at least one term as an executive magistrate. A senator also had to meet a minimum property requirement of 1 million sestertii. Not all men who qualified for the ordo senatorius chose to take a Senate seat, which required legal domicile at Rome. Emperors often filled vacancies in the 600-member body by appointment. A senator's son belonged to the ordo senatorius, but he had to qualify on his own merits for admission to the Senate. A senator could be removed for violating moral standards. In the time of Nero, senators were still primarily from Italy, with some from the Iberian peninsula and southern France; men from the Greek-speaking provinces of the East began to be added under Vespasian. The first senator from the easternmost province, Cappadocia, was admitted under Marcus Aurelius.[l] By the Severan dynasty (193–235), Italians made up less than half the Senate. 
During the 3rd century, domicile at Rome became impractical, and inscriptions attest to senators who were active in politics and munificence in their homeland (patria). Senators were the traditional governing class who rose through the cursus honorum, the political career track, but equestrians often possessed greater wealth and political power. Membership in the equestrian order was based on property; in Rome's early days, equites or knights had been distinguished by their ability to serve as mounted warriors, but cavalry service was a separate function in the Empire.[m] A census valuation of 400,000 sesterces and three generations of free birth qualified a man as an equestrian. The census of 28 BC uncovered large numbers of men who qualified, and in 14 AD, a thousand equestrians were registered at Cádiz and Padua alone.[n] Equestrians rose through a military career track (tres militiae) to become highly placed prefects and procurators within the Imperial administration. The rise of provincial men to the senatorial and equestrian orders is an aspect of social mobility in the early Empire. Roman aristocracy was based on competition, and unlike later European nobility, a Roman family could not maintain its position merely through hereditary succession or having title to lands. Admission to the higher ordines brought distinction and privileges, but also responsibilities. In antiquity, a city depended on its leading citizens to fund public works, events, and services (munera). Maintaining one's rank required massive personal expenditures. Decurions were so vital for the functioning of cities that in the later Empire, as the ranks of the town councils became depleted, those who had risen to the Senate were encouraged to return to their hometowns, in an effort to sustain civic life. In the later Empire, the dignitas ("worth, esteem") that attended on senatorial or equestrian rank was refined further with titles such as vir illustris ("illustrious man"). The appellation clarissimus (Greek lamprotatos) was used to designate the dignitas of certain senators and their immediate family, including women. "Grades" of equestrian status proliferated. As the republican principle of citizens' equality under the law faded, the symbolic and social privileges of the upper classes led to an informal division of Roman society into those who had acquired greater honours (honestiores) and humbler folk (humiliores). In general, honestiores were the members of the three higher "orders", along with certain military officers. The granting of universal citizenship in 212 seems to have increased the competitive urge among the upper classes to have their superiority affirmed, particularly within the justice system. Sentencing depended on the judgment of the presiding official as to the relative "worth" (dignitas) of the defendant: an honestior could pay a fine for a crime for which an humilior might receive a scourging. Execution, which was an infrequent legal penalty for free men under the Republic, could be quick and relatively painless for honestiores, while humiliores might suffer the kinds of torturous death previously reserved for slaves, such as crucifixion and condemnation to the beasts. In the early Empire, those who converted to Christianity could lose their standing as honestiores, especially if they declined to fulfil religious responsibilities, and thus became subject to punishments that created the conditions of martyrdom. 
Government and military The three major elements of the Imperial state were the central government, the military, and the provincial government. The military established control of a territory through war, but after a city or people was brought under treaty, the mission turned to policing: protecting Roman citizens, agricultural fields, and religious sites. The Romans lacked sufficient manpower or resources to rule through force alone. Cooperation with local elites was necessary to maintain order, collect information, and extract revenue. The Romans often exploited internal political divisions. Communities with demonstrated loyalty to Rome retained their own laws, could collect their own taxes locally, and in exceptional cases were exempt from Roman taxation. Legal privileges and relative independence incentivized compliance. Roman government was thus limited, but efficient in its use of available resources. The Imperial cult of ancient Rome identified emperors and some members of their families with divinely sanctioned authority (auctoritas). The rite of apotheosis (also called consecratio) signified the deceased emperor's deification. The dominance of the emperor was based on the consolidation of powers from several republican offices. The emperor made himself the central religious authority as pontifex maximus, and centralized the right to declare war, ratify treaties, and negotiate with foreign leaders. While these functions were clearly defined during the Principate, the emperor's powers over time became less constitutional and more monarchical, culminating in the Dominate. The emperor was the ultimate authority in policy- and decision-making, but in the early Principate, he was expected to be accessible and deal personally with official business and petitions. A bureaucracy formed around him only gradually. The Julio-Claudian emperors relied on an informal body of advisors that included not only senators and equestrians, but trusted slaves and freedmen. After Nero, the influence of the latter was regarded with suspicion, and the emperor's council (consilium) became subject to official appointment for greater transparency. Though the Senate took a lead in policy discussions until the end of the Antonine dynasty, equestrians played an increasingly important role in the consilium. The women of the emperor's family often intervened directly in his decisions. Access to the emperor might be gained at the daily reception (salutatio), a development of the traditional homage a client paid to his patron; public banquets hosted at the palace; and religious ceremonies. The common people who lacked this access could manifest their approval or displeasure as a group at games. By the 4th century, the Christian emperors became remote figureheads who issued general rulings, no longer responding to individual petitions. Although the Senate could do little short of assassination and open rebellion to contravene the will of the emperor, it retained its symbolic political centrality. The Senate legitimated the emperor's rule, and the emperor employed senators as legates (legati): generals, diplomats, and administrators. The practical source of an emperor's power and authority was the military. The legionaries were paid by the Imperial treasury, and swore an annual oath of loyalty to the emperor. Most emperors chose a successor, usually a close family member or adopted heir. The new emperor had to seek a swift acknowledgement of his status and authority to stabilize the political landscape. 
No emperor could hope to survive without the allegiance of the Praetorian Guard and the legions. To secure their loyalty, several emperors paid the donativum, a monetary reward. In theory, the Senate was entitled to choose the new emperor, but did so mindful of acclamation by the army or Praetorians. After the Punic Wars, the Roman army comprised professional soldiers who volunteered for 20 years of active duty and five as reserves. The transition to a professional military began during the late Republic and was one of the many profound shifts away from republicanism, under which an army of conscript citizens had defended the homeland against a specific threat. The Romans expanded their war machine by "organizing the communities that they conquered in Italy into a system that generated huge reservoirs of manpower for their army". By Imperial times, military service was a full-time career. The pervasiveness of military garrisons throughout the Empire was a major influence in the process of Romanization. The primary mission of the military of the early empire was to preserve the Pax Romana. The major divisions of the military, each described below, were the legions, the Praetorian Guard, the auxilia, and the navy. Through his military reforms, which included consolidating or disbanding units of questionable loyalty, Augustus regularized the legion. A legion was organized into ten cohorts, each of which comprised six centuries, with a century further made up of ten squads (contubernia); the exact size of the Imperial legion, which was likely determined by logistics, has been estimated to range from 4,800 to 5,280. After Germanic tribes wiped out three legions in the Battle of the Teutoburg Forest in 9 AD, the number of legions was increased from 25 to around 30. The army had about 300,000 soldiers in the 1st century, and under 400,000 in the 2nd, "significantly smaller" than the collective armed forces of the conquered territories. No more than 2% of adult males living in the Empire served in the Imperial army. Augustus also created the Praetorian Guard: nine cohorts, ostensibly to maintain the public peace, which were garrisoned in Italy. Better paid than the legionaries, the Praetorians served only sixteen years. The auxilia were recruited from among the non-citizens. Organized in smaller units of roughly cohort strength, they were paid less than the legionaries, and after 25 years of service were rewarded with Roman citizenship, also extended to their sons. According to Tacitus there were roughly as many auxiliaries as there were legionaries—thus, around 125,000 men, implying approximately 250 auxiliary regiments. The Roman cavalry of the earliest Empire were primarily from Celtic, Hispanic or Germanic areas. Several aspects of training and equipment derived from the Celts. The Roman navy not only aided in the supply and transport of the legions but also in the protection of the frontiers along the rivers Rhine and Danube. Another duty was protecting maritime trade against pirates. It patrolled the Mediterranean, parts of the North Atlantic coasts, and the Black Sea. Nevertheless, the army was considered the senior and more prestigious branch. 
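As a rough arithmetic check on the unit figures above, the following short Python sketch works out the implied sizes. It is illustrative only: the 8-man contubernium and the roughly 500-man auxiliary regiment are conventional assumptions, not figures given in the text.

# Illustrative sketch; assumed values are marked as such.
CONTUBERNIUM = 8                      # assumed men per squad (not stated in the text)
CENTURY = 10 * CONTUBERNIUM           # ten squads per century -> 80 men
LEGION = 10 * 6 * CENTURY             # ten cohorts of six centuries -> 4,800 men
                                      # (the text's estimates run from 4,800 to 5,280)
legions = 30                          # approximate number of legions after 9 AD, per the text
legionaries = legions * LEGION        # about 144,000
auxiliaries = 125_000                 # Tacitus: roughly as many auxiliaries as legionaries
regiments = auxiliaries // 500        # about 250 regiments of roughly cohort strength (assumed ~500 men each)
print(LEGION, legionaries, legionaries + auxiliaries, regiments)
# 4800 144000 269000 250 -- broadly consistent with the ~300,000 soldiers cited for the 1st century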
An annexed territory became a Roman province in three steps: making a register of cities, taking a census, and surveying the land. Further government recordkeeping included births and deaths, real estate transactions, taxes, and juridical proceedings. In the 1st and 2nd centuries, the central government sent out around 160 officials annually to govern outside Italy. Among these officials were the Roman governors: either magistrates elected at Rome who in the name of the Roman people governed senatorial provinces, or governors, usually of equestrian rank, who held their imperium on behalf of the emperor in imperial provinces, most notably Roman Egypt. A governor had to make himself accessible to the people he governed, but he could delegate various duties. His staff, however, was minimal: his official attendants (apparitores), including lictors, heralds, messengers, scribes, and bodyguards; legates, both civil and military, usually of equestrian rank; and friends who accompanied him unofficially. Other officials were appointed as supervisors of government finances. Separating fiscal responsibility from justice and administration was a reform of the Imperial era, intended to prevent provincial governors and tax farmers from exploiting local populations for personal gain. Equestrian procurators, whose authority was originally "extra-judicial and extra-constitutional", managed both state-owned property and the personal property of the emperor (res privata). Because Roman government officials were few, a provincial who needed help with a legal dispute or criminal case might seek out any Roman perceived to have some official capacity. In the High Empire, Italy was legally distinguished from the provinces, and along with some favored provincial communities, enjoyed immunity from the property tax and poll tax. Under the Emperor Diocletian, however, Italy lost these privileges and was subdivided into provinces. Roman courts held original jurisdiction over cases involving Roman citizens throughout the empire, but there were too few judicial functionaries to impose Roman law uniformly in the provinces. Most parts of the Eastern Empire already had well-established law codes and juridical procedures. Generally, it was Roman policy to respect the mos regionis ("regional tradition" or "law of the land") and to regard local laws as a source of legal precedent and social stability. The compatibility of Roman and local law was thought to reflect an underlying ius gentium, the "law of nations" or international law regarded as common and customary. If provincial law conflicted with Roman law or custom, Roman courts heard appeals, and the emperor held final decision-making authority.[o] In the West, law had been administered on a highly localized or tribal basis, and private property rights may have been a novelty of the Roman era, particularly among Celts. Roman law facilitated the acquisition of wealth by a pro-Roman elite. The extension of universal citizenship to all free inhabitants of the Empire in 212 required the uniform application of Roman law, replacing the local law codes that had applied to non-citizens. Diocletian's efforts to stabilize the Empire after the Crisis of the Third Century included two major compilations of law in four years, the Codex Gregorianus and the Codex Hermogenianus, to guide provincial administrators in setting consistent legal standards. The pervasiveness of Roman law throughout Western Europe enormously influenced the Western legal tradition, reflected in the continued use of Latin legal terminology in modern law. Taxation under the Empire amounted to about 5% of its gross product. The typical tax rate for individuals ranged from 2 to 5%. The tax code was "bewildering" in its complicated system of direct and indirect taxes, some paid in cash and some in kind. Taxes might be specific to a province or to kinds of property, such as fisheries; they might be temporary. 
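To make the rates just mentioned concrete, here is a minimal Python sketch of a flat ad valorem levy. Only the 2–5% band comes from the text; the assessed value and the chosen rate are invented for the example.

# Illustrative only: a flat levy within the 2-5% band cited above, on a hypothetical census assessment.
def annual_levy(assessed_value_hs: float, rate: float) -> float:
    """Tax owed, in sesterces (HS), on an assessed value at the given rate."""
    if not 0.02 <= rate <= 0.05:
        raise ValueError("rate outside the 2-5% band cited in the text")
    return assessed_value_hs * rate

print(annual_levy(100_000, 0.03))  # a holding assessed at 100,000 HS taxed at 3% -> 3000.0 HS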
Tax collection was justified by the need to maintain the military, and taxpayers sometimes got a refund if the army captured a surplus of booty. In-kind taxes were accepted from less-monetized areas, particularly those who could supply grain or goods to army camps. The primary source of direct tax revenue was individuals, who paid a poll tax and a tax on their land, construed as a tax on its produce or productive capacity. Tax obligations were determined by the census: each head of household provided a headcount of his household, as well as an accounting of his property. A major source of indirect-tax revenue was the portoria, customs and tolls on trade, including among provinces. Towards the end of his reign, Augustus instituted a 4% tax on the sale of slaves, which Nero shifted from the purchaser to the dealers, who responded by raising their prices. An owner who manumitted a slave paid a "freedom tax", calculated at 5% of value.[p] An inheritance tax of 5% was assessed when Roman citizens above a certain net worth left property to anyone outside their immediate family. Revenues from the estate tax and from an auction tax went towards the veterans' pension fund (aerarium militare). Low taxes helped the Roman aristocracy increase their wealth, which equalled or exceeded the revenues of the central government. An emperor sometimes replenished his treasury by confiscating the estates of the "super-rich", but in the later period, the resistance of the wealthy to paying taxes was one of the factors contributing to the collapse of the Empire. Economy The Empire is best thought of as a network of regional economies, based on a form of "political capitalism" in which the state regulated commerce to assure its own revenues. Economic growth, though not comparable to modern economies, was greater than that of most other societies prior to industrialization. Territorial conquests permitted a large-scale reorganization of land use that resulted in agricultural surplus and specialization, particularly in north Africa. Some cities were known for particular industries. The scale of urban building indicates a significant construction industry. Papyri preserve complex accounting methods that suggest elements of economic rationalism, and the Empire was highly monetized. Although the means of communication and transport were limited in antiquity, transportation in the 1st and 2nd centuries expanded greatly, and trade routes connected regional economies. The supply contracts for the army drew on local suppliers near the base (castrum), throughout the province, and across provincial borders. Economic historians vary in their calculations of the gross domestic product during the Principate. In the sample years of 14, 100, and 150 AD, estimates of per capita GDP range from 166 to 380 HS. The GDP per capita of Italy is estimated as 40 to 66% higher than in the rest of the Empire, due to tax transfers from the provinces and the concentration of elite income. Economic dynamism resulted in social mobility. Although aristocratic values permeated traditional elite society, wealth requirements for rank indicate a strong tendency towards plutocracy. Prestige could be obtained through investing one's wealth in grand estates or townhouses, luxury items, public entertainments, funerary monuments, and religious dedications. Guilds (collegia) and corporations (corpora) provided support for individuals to succeed through networking. "There can be little doubt that the lower classes of ... 
provincial towns of the Roman Empire enjoyed a high standard of living not equaled again in Western Europe until the 19th century". Households in the top 1.5% of income distribution captured about 20% of income. The "vast majority" produced more than half of the total income, but lived near subsistence. The early Empire was monetized to a near-universal extent, using money to state prices and debts. Augustus established a practical three-tier currency system that Romans used in their daily lives: gold coins (aureus) for major purchases and wealth storage, silver coins (denarius) that workers earned and used to pay taxes, and bronze/brass coins, especially the brass sestertius and dupondius, along with the copper as and smaller denominations, that people used for everyday shopping and small transactions. Most accounts, rents, and public fees were reckoned in sesterces (HS), even when payment arrived as denarii, aurei, or the bronze pieces converted at the fixed ratios of one aureus to twenty-five denarii, one denarius to four sesterces, and one sesterce to four asses. Because the bronzes circulated by face value rather than metal content, Romans in the first and second centuries counted coins rather than weighing them, and bullion or ingots were rarely treated as pecunia ("money") outside frontier contexts. This reliance on token bronzes underpinned the fiduciary character of Roman coinage and contributed to the debasement of the silver denominations in the later Empire, even as standardized money promoted trade, market integration, and a substantial money supply for commerce and saving. Rome had no central bank, and regulation of the banking system was minimal. Banks of classical antiquity typically kept less in reserves than the full total of customers' deposits. A typical bank had fairly limited capital, and often only one principal. Seneca assumes that anyone involved in Roman commerce needs access to credit. A professional deposit banker received and held deposits for a fixed or indefinite term, and lent money to third parties. The senatorial elite were involved heavily in private lending, both as creditors and borrowers. The holder of a debt could use it as a means of payment by transferring it to another party, without cash changing hands. Although it has sometimes been thought that ancient Rome lacked documentary transactions, the system of banks throughout the Empire permitted the exchange of large sums without physically transferring coins, in part because of the risks of moving large amounts of cash. Only one serious credit shortage is known to have occurred in the early Empire, in 33 AD; generally, available capital exceeded the amount needed by borrowers. The central government itself did not borrow money, and without public debt had to fund deficits from cash reserves. Emperors of the Antonine and Severan dynasties debased the currency, particularly the denarius, under the pressures of meeting military payrolls. Sudden inflation under Commodus damaged the credit market. In the mid-200s, the supply of specie contracted sharply. Conditions during the Crisis of the Third Century—such as reductions in long-distance trade, disruption of mining operations, and the physical transfer of gold coinage outside the empire by invading enemies—greatly diminished the money supply and the banking sector. Although Roman coinage had long been fiat money or fiduciary currency, general economic anxieties came to a head under Aurelian, and bankers lost confidence in coins. 
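As a concrete illustration of the fixed ratios described above (one aureus to twenty-five denarii, one denarius to four sesterces, one sesterce to four asses), the following Python sketch converts a sum reckoned in sesterces into the other denominations. The sample amount is invented for the example.

# Fixed conversion ratios stated above: 1 aureus = 25 denarii = 100 sesterces = 400 asses.
SESTERCES_PER_DENARIUS = 4
DENARII_PER_AUREUS = 25
ASSES_PER_SESTERCE = 4
SESTERCES_PER_AUREUS = SESTERCES_PER_DENARIUS * DENARII_PER_AUREUS  # 100

def breakdown(total_hs: int) -> dict:
    """Express an amount reckoned in sesterces (HS) as aurei, denarii, and sesterces."""
    aurei, rest = divmod(total_hs, SESTERCES_PER_AUREUS)
    denarii, sesterces = divmod(rest, SESTERCES_PER_DENARIUS)
    return {"aurei": aurei, "denarii": denarii, "sesterces": sesterces,
            "asses_total": total_hs * ASSES_PER_SESTERCE}

print(breakdown(1_234))
# {'aurei': 12, 'denarii': 8, 'sesterces': 2, 'asses_total': 4936}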
Despite Diocletian's introduction of the gold solidus and monetary reforms, the credit market of the Empire never recovered its former robustness. The main mining regions of the Empire were the Iberian Peninsula (silver, copper, lead, iron and gold); Gaul (gold, silver, iron); Britain (mainly iron, lead, tin), the Danubian provinces (gold, iron); Macedonia and Thrace (gold, silver); and Asia Minor (gold, silver, iron, tin). Intensive large-scale mining—of alluvial deposits, and by means of open-cast mining and underground mining—took place from the reign of Augustus up to the early 3rd century, when the instability of the Empire disrupted production.[citation needed] Hydraulic mining allowed base and precious metals to be extracted on a proto-industrial scale. The total annual iron output is estimated at 82,500 tonnes. Copper and lead production levels were unmatched until the Industrial Revolution. At its peak around the mid-2nd century, the Roman silver stock is estimated at 10,000 t, five to ten times larger than the combined silver mass of medieval Europe and the Caliphate around 800 AD. As an indication of the scale of Roman metal production, lead pollution in the Greenland ice sheet quadrupled over prehistoric levels during the Imperial era and dropped thereafter. The Empire completely encircled the Mediterranean, which they called "our sea" (Mare Nostrum). Roman sailing vessels navigated the Mediterranean as well as major rivers. Transport by water was preferred where possible, as moving commodities by land was more difficult. Vehicles, wheels, and ships indicate the existence of a great number of skilled woodworkers. Land transport utilized the advanced system of Roman roads, called "viae". These roads were primarily built for military purposes, but also served commercial ends. The in-kind taxes paid by communities included the provision of personnel, animals, or vehicles for the cursus publicus, the state mail and transport service established by Augustus. Relay stations were located along the roads every seven to twelve Roman miles, and tended to grow into villages or trading posts. A mansio (plural mansiones) was a privately run service station franchised by the imperial bureaucracy for the cursus publicus. The distance between mansiones was determined by how far a wagon could travel in a day. Carts were usually pulled by mules, travelling about 4 mph. Roman provinces traded among themselves, but trade extended outside the frontiers to regions as far away as China and India. Chinese trade was mostly conducted overland through middlemen along the Silk Road; Indian trade also occurred by sea from Egyptian ports. The main commodity was grain. Also traded were olive oil, foodstuffs, garum (fish sauce), slaves, ore and manufactured metal objects, fibres and textiles, timber, pottery, glassware, marble, papyrus, spices and materia medica, ivory, pearls, and gemstones. Though most provinces could produce wine, regional varietals were desirable and wine was a central trade good. Inscriptions record 268 different occupations in Rome and 85 in Pompeii. Professional associations or trade guilds (collegia) are attested for a wide range of occupations, some quite specialized. Work performed by slaves falls into five general categories: domestic, with epitaphs recording at least 55 different household jobs; imperial or public service; urban crafts and services; agriculture; and mining. Convicts provided much of the labour in the mines or quarries, where conditions were notoriously brutal. 
In practice, there was little division of labour between slave and free, and most workers were illiterate and without special skills. The greatest number of common labourers were employed in agriculture: in Italian industrial farming (latifundia), these may have been mostly slaves, but elsewhere slave farm labour was probably less important. Textile and clothing production was a major source of employment. Both textiles and finished garments were traded and products were often named for peoples or towns, like a fashion "label". Better ready-to-wear was exported by local businessmen (negotiatores or mercatores). Finished garments might be retailed by their sales agents, by vestiarii (clothing dealers), or peddled by itinerant merchants. The fullers (fullones) and dye workers (coloratores) had their own guilds. Centonarii were guild workers who specialized in textile production and the recycling of old clothes into pieced goods.[q] Architecture and engineering The chief Roman contributions to architecture were the arch, vault, and dome. Some Roman structures still stand today, due in part to sophisticated methods of making cements and concrete. Roman temples developed Etruscan and Greek forms, with some distinctive elements. Roman roads are considered the most advanced built until the early 19th century.[citation needed] Roman bridges were among the first large and lasting bridges, built from stone (and in most cases concrete) with the arch as the basic structure. The largest Roman bridge was Trajan's bridge over the lower Danube, constructed by Apollodorus of Damascus, which remained for over a millennium the longest bridge to have been built. The Romans built many dams and reservoirs for water collection, such as the Subiaco Dams, two of which fed the Anio Novus, one of the largest aqueducts of Rome. The Romans constructed numerous aqueducts. De aquaeductu, a treatise by Frontinus, who served as water commissioner, reflects the administrative importance placed on the water supply. Masonry channels carried water along a precise gradient, using gravity alone. It was then collected in tanks and fed through pipes to public fountains, baths, toilets, or industrial sites. The main aqueducts in Rome were the Aqua Claudia and the Aqua Marcia. The complex system built to supply Constantinople had its most distant supply drawn from over 120 km away along a route of more than 336 km. Roman aqueducts were built to remarkably fine tolerance, and to a technological standard not equalled until modern times. The Romans also used aqueducts in their extensive mining operations across the empire. Insulated glazing (or "double glazing") was used in the construction of public baths. Elite housing in cooler climates might have hypocausts, a form of central heating. The Romans were the first culture to assemble all essential components of the much later steam engine: the crank and connecting rod system, Hero's aeolipile (generating steam power), the cylinder and piston (in metal force pumps), non-return valves (in water pumps), and gearing (in water mills and clocks). Daily life The city was viewed as fostering civilization by being "properly designed, ordered, and adorned". Augustus undertook a vast building programme in Rome, supported public displays of art that expressed imperial ideology, and reorganized the city into neighbourhoods (vici) administered at the local level with police and firefighting services. 
A focus of Augustan monumental architecture was the Campus Martius, an open area outside the city centre: the Altar of Augustan Peace (Ara Pacis Augustae) was located there, as was an obelisk imported from Egypt that formed the pointer (gnomon) of a horologium. With its public gardens, the Campus was among the most attractive places in Rome to visit. City planning and urban lifestyles were influenced by the Greeks early on, and in the Eastern Empire, Roman rule shaped the development of cities that already had a strong Hellenistic character. Cities such as Athens, Aphrodisias, Ephesus and Gerasa tailored city planning and architecture to imperial ideals, while expressing their individual identity and regional preeminence. In areas inhabited by Celtic-speaking peoples, Rome encouraged the development of urban centres with stone temples, forums, monumental fountains, and amphitheatres, often on or near the sites of preexisting walled settlements known as oppida.[r] Urbanization in Roman Africa expanded on Greek and Punic coastal cities. The network of cities (coloniae, municipia, civitates or in Greek terms poleis) was a primary cohesive force during the Pax Romana. Romans of the 1st and 2nd centuries were encouraged to "inculcate the habits of peacetime". As the classicist Clifford Ando noted: Most of the cultural appurtenances popularly associated with imperial culture—public cult and its games and civic banquets, competitions for artists, speakers, and athletes, as well as the funding of the great majority of public buildings and public display of art—were financed by private individuals, whose expenditures in this regard helped to justify their economic power and legal and provincial privileges. In the city of Rome, most people lived in multistory apartment buildings (insulae) that were often squalid firetraps. Public facilities—such as baths (thermae), toilets with running water (latrinae), basins or elaborate fountains (nymphea) delivering fresh water, and large-scale entertainments such as chariot races and gladiator combat—were aimed primarily at the common people. The public baths served hygienic, social and cultural functions. Bathing was the focus of daily socializing. Roman baths were distinguished by a series of rooms that offered communal bathing in three temperatures, with amenities that might include an exercise room, sauna, exfoliation spa, ball court, or outdoor swimming pool. Baths had hypocaust heating: the floors were suspended over hot-air channels. Public baths were part of urban culture throughout the provinces, but in the late 4th century, individual tubs began to replace communal bathing. Christians were advised to go to the baths only for hygiene. Rich families from Rome usually had two or more houses: a townhouse (domus) and at least one luxury home (villa) outside the city. The domus was a privately owned single-family house, and might be furnished with a private bath (balneum), but it was not a place to retreat from public life. Although some neighbourhoods show a higher concentration of such houses, they were not segregated enclaves. The domus was meant to be visible and accessible. The atrium served as a reception hall in which the paterfamilias (head of household) met with clients every morning. It was a centre of family religious rites, containing a shrine and images of family ancestors. The houses were located on busy public roads, and ground-level spaces were often rented out as shops (tabernae). 
In addition to a kitchen garden—windowboxes might substitute in the insulae—townhouses typically enclosed a peristyle garden. The villa by contrast was an escape from the city, and in literature represents a lifestyle that balances intellectual and artistic interests (otium) with an appreciation of nature and agriculture. Ideally a villa commanded a view or vista, carefully framed by the architectural design. Augustus' programme of urban renewal, and the growth of Rome's population to as many as one million, was accompanied by nostalgia for rural life. Poetry idealized the lives of farmers and shepherds. Interior decorating often featured painted gardens, fountains, landscapes, vegetative ornament, and animals, rendered accurately enough to be identified by species. On a more practical level, the central government took an active interest in supporting agriculture. Producing food was the priority of land use. Larger farms (latifundia) achieved an economy of scale that sustained urban life. Small farmers benefited from the development of local markets in towns and trade centres. Agricultural techniques such as crop rotation and selective breeding were disseminated throughout the Empire, and new crops were introduced from one province to another. Maintaining an affordable food supply to the city of Rome had become a major political issue in the late Republic, when the state began to provide a grain dole (Cura Annonae) to citizens who registered for it (about 200,000–250,000 adult males in Rome). The dole cost at least 15% of state revenues, but improved living conditions among the lower classes, and subsidized the rich by allowing workers to spend more of their earnings on the wine and olive oil produced on estates. The grain dole also had symbolic value: it affirmed the emperor's position as universal benefactor, and the right of citizens to share in "the fruits of conquest". The annona, public facilities, and spectacular entertainments mitigated the otherwise dreary living conditions of lower-class Romans, and kept social unrest in check. The satirist Juvenal, however, saw "bread and circuses" (panem et circenses) as emblematic of the loss of republican political liberty: The public has long since cast off its cares: the people that once bestowed commands, consulships, legions and all else, now meddles no more and longs eagerly for just two things: bread and circuses. Epidemics were common in the ancient world, and occasional pandemics in the Empire killed millions. The Roman population was unhealthy. About 20 percent—a large percentage by ancient standards—lived in cities, Rome being the largest. The cities were a "demographic sink": the death rate exceeded the birth rate and constant immigration was necessary to maintain the population. Average lifespan is estimated at the mid-twenties, and perhaps more than half of children died before reaching adulthood. Dense urban populations and poor sanitation contributed to disease. Land and sea connections facilitated and sped the transfer of infectious diseases across the empire's territories. The rich were not immune; only two of emperor Marcus Aurelius's fourteen children are known to have reached adulthood. The importance of a good diet to health was recognized by medical writers such as Galen (2nd century). Views on nutrition were influenced by beliefs like humoral theory. 
A good indicator of nutrition and disease burden is average height: the average Roman was shorter in stature than the population of pre-Roman Italian societies and medieval Europe. Most apartments in Rome lacked kitchens, though a charcoal brazier could be used for rudimentary cookery. Prepared food was sold at pubs and bars, inns, and food stalls (tabernae, cauponae, popinae, thermopolia). Carryout and restaurants were for the lower classes; fine dining appeared only at dinner parties in wealthy homes with a chef (archimagirus) and kitchen staff, or banquets hosted by social clubs (collegia). Most Romans consumed at least 70% of their daily calories in the form of cereals and legumes. Puls (pottage) was considered the food of the Romans, and could be elaborated to produce dishes similar to polenta or risotto. Urban populations and the military preferred bread. By the reign of Aurelian, the state had begun to distribute the annona as a daily ration of bread baked in state factories, and added olive oil, wine, and pork to the dole. Roman literature focuses on the dining habits of the upper classes, for whom the evening meal (cena) had important social functions. Guests were entertained in a finely decorated dining room (triclinium) furnished with couches. By the late Republic, women dined, reclined, and drank wine along with men. The poet Martial describes a dinner, beginning with the gustatio ("tasting" or "appetizer") salad. The main course was kid, beans, greens, a chicken, and leftover ham, followed by a dessert of fruit and wine. Roman "foodies" indulged in wild game, fowl such as peacock and flamingo, large fish (mullet was especially prized), and shellfish. Luxury ingredients were imported from the far reaches of empire. A book-length collection of Roman recipes is attributed to Apicius, a name for several figures in antiquity that became synonymous with "gourmet". Refined cuisine could be moralized as a sign of either civilized progress or decadent decline. Most often, because of the importance of landowning in Roman culture, produce—cereals, legumes, vegetables, and fruit—were considered more civilized foods than meat. The Mediterranean staples of bread, wine, and oil were sacralized by Roman Christianity, while Germanic meat consumption became a mark of paganism. Some philosophers and Christians resisted the demands of the body and the pleasures of food, and adopted fasting as an ideal. Food became simpler in general as urban life in the West diminished and trade routes were disrupted; the Church formally discouraged gluttony, and hunting and pastoralism were seen as simple and virtuous. When Juvenal complained that the Roman people had exchanged their political liberty for "bread and circuses", he was referring to the state-provided grain dole and the circenses, events held in the entertainment venue called a circus. The largest such venue in Rome was the Circus Maximus, the setting of horse races, chariot races, the equestrian Troy Game, staged beast hunts (venationes), athletic contests, gladiator combat, and historical re-enactments. From earliest times, several religious festivals had featured games (ludi), primarily horse and chariot races (ludi circenses). The races retained religious significance in connection with agriculture, initiation, and the cycle of birth and death.[s] Under Augustus, public entertainments were presented on 77 days of the year; by the reign of Marcus Aurelius, this had expanded to 135. 
Circus games were preceded by an elaborate parade (pompa circensis) that ended at the venue. Competitive events were held also in smaller venues such as the amphitheatre, which became the characteristic Roman spectacle venue, and stadium. Greek-style athletics included footraces, boxing, wrestling, and the pancratium. Aquatic displays, such as the mock sea battle (naumachia) and a form of "water ballet", were presented in engineered pools. State-supported theatrical events (ludi scaenici) took place on temple steps or in grand stone theatres, or in the smaller enclosed theatre called an odeon. Circuses were the largest structure regularly built in the Roman world. The Flavian Amphitheatre, better known as the Colosseum, became the regular arena for blood sports in Rome. Many Roman amphitheatres, circuses and theatres built in cities outside Italy are visible as ruins today. The local ruling elite were responsible for sponsoring spectacles and arena events, which both enhanced their status and drained their resources. The physical arrangement of the amphitheatre represented the order of Roman society: the emperor in his opulent box; senators and equestrians in reserved advantageous seats; women seated at a remove from the action; slaves given the worst places, and everybody else in-between. The crowd could call for an outcome by booing or cheering, but the emperor had the final say. Spectacles could quickly become sites of social and political protest, and emperors sometimes had to deploy force to put down crowd unrest, most notoriously at the Nika riots in 532. The chariot teams were known by the colours they wore, with the two main teams being the Blues and the Greens. Fan loyalty was fierce and at times erupted into sports riots. Racing was perilous, but charioteers were among the most celebrated and well-compensated athletes. Circuses were designed to ensure that no team had an unfair advantage and to minimize collisions (naufragia), which were nonetheless frequent and satisfying to the crowd. The races retained a magical aura through their early association with chthonic rituals: circus images were considered protective or lucky, curse tablets have been found buried at the site of racetracks, and charioteers were often suspected of sorcery. Chariot racing continued into the Byzantine period under imperial sponsorship, but the decline of cities in the 6th and 7th centuries led to its eventual demise. The Romans thought gladiator contests had originated with funeral games and sacrifices. Some of the earliest styles of gladiator fighting had ethnic designations such as "Thracian" or "Gallic". The staged combats were considered munera, "services, offerings, benefactions", initially distinct from the festival games (ludi). To mark the opening of the Colosseum, Titus presented 100 days of arena events, with 3,000 gladiators competing on a single day. Roman fascination with gladiators is indicated by how widely they are depicted on mosaics, wall paintings, lamps, and in graffiti. Gladiators were trained combatants who might be slaves, convicts, or free volunteers. Death was not a necessary or even desirable outcome in matches between these highly skilled fighters, whose training was costly and time-consuming. By contrast, noxii were convicts sentenced to the arena with little or no training, often unarmed, and with no expectation of survival; physical suffering and humiliation were considered appropriate retributive justice. 
These executions were sometimes staged or ritualized as re-enactments of myths, and amphitheatres were equipped with elaborate stage machinery to create special effects. Modern scholars have found the pleasure Romans took in the "theatre of life and death" difficult to understand. Pliny the Younger rationalized gladiator spectacles as good for the people, "to inspire them to face honourable wounds and despise death, by exhibiting love of glory and desire for victory". Some Romans such as Seneca were critical of the brutal spectacles, but found virtue in the courage and dignity of the defeated fighter—an attitude that finds its fullest expression with the Christians martyred in the arena. Tertullian considered deaths in the arena to be nothing more than a dressed-up form of human sacrifice. Even martyr literature, however, offers "detailed, indeed luxuriant, descriptions of bodily suffering", and became a popular genre at times indistinguishable from fiction. The singular ludus, "play, game, sport, training", had a wide range of meanings such as "word play", "theatrical performance", "board game", "primary school", and even "gladiator training school" (as in Ludus Magnus). Activities for children and young people in the Empire included hoop rolling and knucklebones (astragali or "jacks"). Girls had dolls made of wood, terracotta, and especially bone and ivory. Ball games include trigon and harpastum. People of all ages played board games, including latrunculi ("Raiders") and XII scripta ("Twelve Marks"). A game referred to as alea (dice) or tabula (the board) may have been similar to backgammon. Dicing as a form of gambling was disapproved of, but was a popular pastime during the festival of the Saturnalia. After adolescence, most physical training for males was of a military nature. The Campus Martius originally was an exercise field where young men learned horsemanship and warfare. Hunting was also considered an appropriate pastime. According to Plutarch, conservative Romans disapproved of Greek-style athletics that promoted a fine body for its own sake, and condemned Nero's efforts to encourage Greek-style athletic games. Some women trained as gymnasts and dancers, and a rare few as female gladiators. The "Bikini Girls" mosaic shows young women engaging in routines comparable to rhythmic gymnastics.[t] Women were encouraged to maintain health through activities such as playing ball, swimming, walking, or reading aloud (as a breathing exercise). In a status-conscious society like that of the Romans, clothing and personal adornment indicated the etiquette of interacting with the wearer. Wearing the correct clothing reflected a society in good order. There is little direct evidence of how Romans dressed in daily life, since portraiture may show the subject in clothing with symbolic value, and surviving textiles are rare. The toga was the distinctive national garment of the male citizen, but it was heavy and impractical, worn mainly for conducting political or court business and religious rites. It was a "vast expanse" of semi-circular white wool that could not be put on and draped correctly without assistance. The drapery became more intricate and structured over time. The toga praetexta, with a purple or purplish-red stripe representing inviolability, was worn by children who had not come of age, curule magistrates, and state priests. Only the emperor could wear an all-purple toga (toga picta). Ordinary clothing was dark or colourful. 
The basic garment for all Romans, regardless of gender or wealth, was the simple sleeved tunic, with length differing by wearer. The tunics of poor people and labouring slaves were made from coarse wool in natural, dull shades; finer tunics were made of lightweight wool or linen. A man of the senatorial or equestrian order wore a tunic with two purple stripes (clavi) woven vertically: the wider the stripe, the higher the wearer's status. Other garments could be layered over the tunic. Common male attire also included cloaks, and in some regions, trousers. In the 2nd century, emperors and elite men are often portrayed wearing the pallium, an originally Greek mantle; women are also portrayed in the pallium. Tertullian considered the pallium an appropriate garment both for Christians, in contrast to the toga, and for educated people. Roman clothing styles changed over time. In the Dominate, clothing worn by both soldiers and bureaucrats became highly decorated with geometrical patterns, stylized plant motifs, and in more elaborate examples, human or animal figures. Courtiers of the later Empire wore elaborate silk robes. The militarization of Roman society, and the waning of urban life, affected fashion: heavy military-style belts were worn by bureaucrats as well as soldiers, and the toga was abandoned, replaced by the pallium as a garment embodying social unity. Arts Greek art had a profound influence on Roman art. Public art—including sculpture, monuments such as victory columns or triumphal arches, and the iconography on coins—is often analysed for historical or ideological significance. In the private sphere, artistic objects were made for religious dedications, funerary commemoration, domestic use, and commerce. The wealthy advertised their appreciation of culture through artwork and decorative arts in their homes. Despite the value placed on art, even famous artists were of low social status, partly because they worked with their hands. Portraiture, which survives mainly in sculpture, was the most copious form of imperial art. Portraits during the Augustan period utilize classical proportions, evolving later into a mixture of realism and idealism. Republican portraits were characterized by verism, but as early as the 2nd century BC, Greek heroic nudity was adopted for conquering generals. Imperial portrait sculptures may model a mature head atop a youthful nude or semi-nude body with perfect musculature. Clothed in the toga or military regalia, the body communicates rank or role, not individual characteristics. Portraiture in painting is represented primarily by the Fayum mummy portraits, which evoke Egyptian and Roman traditions of commemorating the dead with realistic painting. Marble portrait sculptures were painted, but traces have rarely survived. Examples of Roman sculpture survive abundantly, though often in damaged or fragmentary condition, including freestanding statuary in marble, bronze and terracotta, and reliefs from public buildings and monuments. Niches in amphitheatres were originally filled with statues, as were formal gardens. Temples housed cult images of deities, often by famed sculptors. Elaborately carved marble and limestone sarcophagi are characteristic of the 2nd to 4th centuries. Sarcophagus relief has been called the "richest single source of Roman iconography", depicting mythological scenes or Jewish/Christian imagery as well as the deceased's life. Early Roman painting drew from Etruscan and Greek models and techniques. 
Examples of Roman paintings can be found in palaces, catacombs and villas. Much of what is known of Roman painting is from the interior decoration of private homes, particularly as preserved by the eruption of Vesuvius. In addition to decorative borders and panels with geometric or vegetative motifs, wall painting depicts scenes from mythology and theatre, landscapes and gardens, spectacles, everyday life, and erotic art. Mosaics are among the most enduring of Roman decorative arts, and are found on floors and other architectural features. The most common is the tessellated mosaic, formed from uniform pieces (tesserae) of materials such as stone and glass. Opus sectile is a related technique in which flat stone, usually coloured marble, is cut precisely into shapes from which geometric or figurative patterns are formed. This more difficult technique became especially popular for luxury surfaces in the 4th century (e.g. the Basilica of Junius Bassus). Figurative mosaics share many themes with painting, and in some cases use almost identical compositions. Geometric patterns and mythological scenes occur throughout the Empire. In North Africa, a particularly rich source of mosaics, homeowners often chose scenes of life on their estates, hunting, agriculture, and local wildlife. Plentiful and major examples of Roman mosaics also come from present-day Turkey (particularly the Antioch mosaics), Italy, southern France, Spain, and Portugal. Decorative arts for luxury consumers included fine pottery, silver and bronze vessels and implements, and glassware. Pottery manufacturing was economically important, as were the glass and metalworking industries. Imports stimulated new regional centres of production. Southern Gaul became a leading producer of the finer red-gloss pottery (terra sigillata) that was a major trade good in 1st-century Europe. Glassblowing was regarded by the Romans as originating in Syria in the 1st century BC, and by the 3rd century, Egypt and the Rhineland had become noted for fine glass. In Roman tradition, borrowed from the Greeks, literary theatre was performed by all-male troupes that used face masks with exaggerated facial expressions to portray emotion. Female roles were played by men in drag (travesti). Roman literary theatre tradition is represented in Latin literature by the tragedies of Seneca, for example. More popular than literary theatre was the genre-defying mimus theatre, which featured scripted scenarios with free improvisation, risqué language and sex scenes, action sequences, and political satire, along with dance, juggling, acrobatics, tightrope walking, striptease, and dancing bears. Unlike literary theatre, mimus was played without masks, and encouraged stylistic realism. Female roles were performed by women. Mimus was related to pantomimus, an early form of story ballet that contained no spoken dialogue but rather a sung libretto, often mythological, either tragic or comic. Although sometimes regarded as foreign, music and dance existed in Rome from earliest times. Music was customary at funerals, and the tibia, a woodwind instrument, was played at sacrifices. Song (carmen) was integral to almost every social occasion. Music was thought to reflect the orderliness of the cosmos. Various woodwinds and "brass" instruments were played, as were stringed instruments such as the cithara, and percussion. The cornu, a long tubular metal wind instrument, was used for military signals and on parade. 
These instruments spread throughout the provinces and are widely depicted in Roman art. The hydraulic pipe organ (hydraulis) was "one of the most significant technical and musical achievements of antiquity", and accompanied gladiator games and events in the amphitheatre. Although certain dances were seen at times as non-Roman or unmanly, dancing was embedded in religious rituals of archaic Rome. Ecstatic dancing was a feature of the mystery religions, particularly the cults of Cybele and Isis. In the secular realm, dancing girls from Syria and Cadiz were extremely popular. Like gladiators, entertainers were legally infames, technically free but little better than slaves. "Stars", however, could enjoy considerable wealth and celebrity, and mingled socially and often sexually with the elite. Performers supported each other by forming guilds, and several memorials for theatre members survive. Theatre and dance were often condemned by Christian polemicists in the later Empire. Literacy, books, and education Estimates of the average literacy rate range from 5 to over 30%. The Roman obsession with documents and inscriptions indicates the value placed on the written word.[u] Laws and edicts were posted as well as read out. Illiterate Roman subjects could have a government scribe (scriba) read or write their official documents for them. The military produced extensive written records. Numeracy was necessary for commerce. Slaves were numerate and literate in significant numbers; some were highly educated. Graffiti and low-quality inscriptions with misspellings and solecisms indicate casual literacy among non-elites.[v] The Romans had an extensive priestly archive, and inscriptions appear throughout the Empire in connection with votives dedicated by ordinary people, as well as "magic spells" (e.g. the Greek Magical Papyri). Books were expensive, since each copy had to be written out on a papyrus roll (volumen) by scribes. The codex—pages bound to a spine—was still a novelty in the 1st century, but by the end of the 3rd century was replacing the volumen. Commercial book production was established by the late Republic, and by the 1st century certain neighbourhoods of Rome and Western provincial cities were known for their bookshops. The quality of editing varied wildly, and plagiarism or forgery were common, since there was no copyright law. Collectors amassed personal libraries, and a fine library was part of the cultivated leisure (otium) associated with the villa lifestyle. Significant collections might attract "in-house" scholars, and an individual benefactor might endow a community with a library (as Pliny the Younger did in Comum). Imperial libraries were open to users on a limited basis, and represented a literary canon. Books considered subversive might be publicly burned, and Domitian crucified copyists for reproducing works deemed treasonous. Literary texts were often shared aloud at meals or with reading groups. Public readings (recitationes) expanded from the 1st through the 3rd century, giving rise to "consumer literature" for entertainment. Illustrated books, including erotica, were popular, but are poorly represented by extant fragments. Literacy began to decline during the Crisis of the Third Century. The emperor Julian banned Christians from teaching the classical curriculum, but the Church Fathers and other Christians adopted Latin and Greek literature, philosophy and science in biblical interpretation. 
As the Western Roman Empire declined, reading became rarer even for those within the Church hierarchy, although it continued in the Byzantine Empire. Traditional Roman education was moral and practical. Stories were meant to instil Roman values (mores maiorum). Parents were expected to act as role models, and working parents passed their skills to their children, who might also enter apprenticeships. Young children were attended by a pedagogue, usually a Greek slave or former slave, who kept the child safe, taught self-discipline and public behaviour, attended class and helped with tutoring. Formal education was available only to families who could pay for it; lack of state support contributed to low literacy. Primary education in reading, writing, and arithmetic might take place at home if parents hired or bought a teacher. Other children attended "public" schools organized by a schoolmaster (ludimagister) paid by parents. Vernae (homeborn slave children) might share in-home or public schooling. Boys and girls received primary education generally from ages 7 to 12, but classes were not segregated by grade or age. Most schools employed corporal punishment. For the socially ambitious, education in Greek as well as Latin was necessary. Schools became more numerous during the Empire, increasing educational opportunities. At the age of 14, upperclass males made their rite of passage into adulthood, and began to learn leadership roles through mentoring from a senior family member or family friend. Higher education was provided by grammatici or rhetores. The grammaticus or "grammarian" taught mainly Greek and Latin literature, with history, geography, philosophy or mathematics treated as explications of the text. With the rise of Augustus, contemporary Latin authors such as Virgil and Livy also became part of the curriculum. The rhetor was a teacher of oratory or public speaking. The art of speaking (ars dicendi) was highly prized, and eloquentia ("speaking ability, eloquence") was considered the "glue" of civilized society. Rhetoric was not so much a body of knowledge (though it required a command of the literary canon) as it was a mode of expression that distinguished those who held social power. The ancient model of rhetorical training—"restraint, coolness under pressure, modesty, and good humour"—endured into the 18th century as a Western educational ideal. In Latin, illiteratus could mean both "unable to read and write" and "lacking in cultural awareness or sophistication". Higher education promoted career advancement. Urban elites throughout the Empire shared a literary culture imbued with Greek educational ideals (paideia). Hellenistic cities sponsored schools of higher learning to express cultural achievement. Young Roman men often went abroad to study rhetoric and philosophy, mostly to Athens. The curriculum in the East was more likely to include music and physical training. On the Hellenistic model, Vespasian endowed chairs of grammar, Latin and Greek rhetoric, and philosophy at Rome, and gave secondary teachers special exemptions from taxes and legal penalties. In the Eastern Empire, Berytus (present-day Beirut) was unusual in offering a Latin education, and became famous for its school of Roman law. The cultural movement known as the Second Sophistic (1st–3rd century AD) promoted the assimilation of Greek and Roman social, educational, and esthetic values. Literate women ranged from cultured aristocrats to girls trained to be calligraphers and scribes. 
The ideal woman in Augustan love poetry was educated and well-versed in the arts. Education seems to have been standard for daughters of the senatorial and equestrian orders. An educated wife was an asset for the socially ambitious household. Literature under Augustus, along with that of the Republic, has been viewed as the "Golden Age" of Latin literature, embodying classical ideals. The three most influential Classical Latin poets—Virgil, Horace, and Ovid—belong to this period. Virgil's Aeneid was a national epic in the manner of the Homeric epics of Greece. Horace perfected the use of Greek lyric metres in Latin verse. Ovid's erotic poetry was enormously popular, but ran afoul of Augustan morality, contributing to his exile. Ovid's Metamorphoses wove together Greco-Roman mythology; his versions of Greek myths became a primary source of later classical mythology, and his work was hugely influential on medieval literature. The early Principate produced satirists such as Persius and Juvenal. The mid-1st through mid-2nd century has conventionally been called the "Silver Age" of Latin literature. The three leading writers—Seneca, Lucan, and Petronius—committed suicide after incurring Nero's displeasure. Epigrammatist and social observer Martial and the epic poet Statius, whose poetry collection Silvae influenced Renaissance literature, wrote during the reign of Domitian. Other authors of the Silver Age included Pliny the Elder, author of the encyclopedic Natural History; his nephew, Pliny the Younger; and the historian Tacitus. The principal Latin prose author of the Augustan age is the historian Livy, whose account of Rome's founding became the most familiar version in modern-era literature. The Twelve Caesars by Suetonius is a primary source for imperial biography. Among Imperial historians who wrote in Greek are Dionysius of Halicarnassus, Josephus, and Cassius Dio. Other major Greek authors of the Empire include the biographer Plutarch, the geographer Strabo, and the rhetorician and satirist Lucian. From the 2nd to the 4th centuries, Christian authors were in active dialogue with the classical tradition. Tertullian was one of the earliest prose authors with a distinctly Christian voice. After the conversion of Constantine, Latin literature is dominated by the Christian perspective. In the late 4th century, Jerome produced the Latin translation of the Bible that became authoritative as the Vulgate. Around that same time, Augustine wrote The City of God against the Pagans, considered "a masterpiece of Western culture". In contrast to the unity of Classical Latin, the literary esthetic of late antiquity has a tessellated quality. A continuing interest in the religious traditions of Rome prior to Christian dominion is found into the 5th century, with the Saturnalia of Macrobius and The Marriage of Philology and Mercury of Martianus Capella. Latin poets of late antiquity include Ausonius, Prudentius, Claudian, and Sidonius Apollinaris. Religion The Romans thought of themselves as highly religious, and attributed their success to their collective piety (pietas) and good relations with the gods (pax deorum). The archaic religion believed to have come from the earliest kings of Rome was the foundation of the mos maiorum, "the way of the ancestors", central to Roman identity. Roman religion was practical and contractual, based on the principle of do ut des, "I give that you might give". 
Religion depended on knowledge and the correct practice of prayer, ritual, and sacrifice, not on faith or dogma, although Latin literature preserves learned speculation on the nature of the divine. For ordinary Romans, religion was a part of daily life. Each home had a household shrine to offer prayers and libations to the family's domestic deities. Neighbourhood shrines and sacred places such as springs and groves dotted the city. The Roman calendar was structured around religious observances; as many as 135 days were devoted to religious festivals and games (ludi). In the wake of the Republic's collapse, state religion adapted to support the new regime. Augustus justified one-man rule with a vast programme of religious revivalism and reform. Public vows now were directed at the wellbeing of the emperor. So-called "emperor worship" expanded on a grand scale the traditional veneration of the ancestral dead and of the Genius, the divine tutelary of every individual. Upon death, an emperor could be made a state divinity (divus) by vote of the Senate. The Roman imperial cult, influenced by Hellenistic ruler cult, became one of the major ways Rome advertised its presence in the provinces and cultivated shared cultural identity. Cultural precedent in the Eastern provinces facilitated a rapid dissemination of Imperial cult, extending as far as Najran, in present-day Saudi Arabia.[w] Rejection of the state religion became tantamount to treason. The Romans are known for the great number of deities they honoured. As the Romans extended their territories, their general policy was to promote stability among diverse peoples by absorbing local deities and cults rather than eradicating them,[x] building temples that framed local theology within Roman religion. Inscriptions throughout the Empire record the side-by-side worship of local and Roman deities, including dedications made by Romans to local gods. By the height of the Empire, numerous syncretic or reinterpreted gods were cultivated, among them cults of Cybele, Isis, Epona, and of solar gods such as Mithras and Sol Invictus, found as far north as Roman Britain. Because Romans had never been obligated to cultivate one god or cult only, religious tolerance was not an issue. Mystery religions, which offered initiates salvation in the afterlife, were a matter of personal choice, practiced in addition to one's family rites and public religion. The mysteries, however, involved exclusive oaths and secrecy, which conservative Romans viewed with suspicion as characteristic of "magic", conspiracy, and subversive activity. Thus, sporadic and sometimes brutal attempts were made to suppress religionists. In Gaul, the power of the druids was checked, first by forbidding Roman citizens to belong to the order, and then by banning druidism altogether. However, Celtic traditions were reinterpreted within the context of Imperial theology, and a new Gallo-Roman religion coalesced; its capital at the Sanctuary of the Three Gauls established precedent for Western cult as a form of Roman-provincial identity. The monotheistic rigour of Judaism posed difficulties for Roman policy that led at times to compromise and granting of special exemptions. Tertullian noted that Judaism, unlike Christianity, was considered a religio licita, "legitimate religion". The Jewish–Roman wars resulted from political as well as religious conflicts; the siege of Jerusalem in 70 AD led to the sacking of the Second Temple and the dispersal of Jewish political power (see Jewish diaspora). 
Christianity emerged in Roman Judaea as a Jewish religious sect in the 1st century and gradually spread out of Jerusalem throughout the Empire and beyond. Imperially authorized persecutions were limited and sporadic, with martyrdoms occurring most often under the authority of local officials. Tacitus reports that after the Great Fire of Rome in AD 64, the emperor attempted to deflect blame from himself onto the Christians. A major persecution occurred under the emperor Domitian and a persecution in 177 took place at Lugdunum, the Gallo-Roman religious capital. A letter from Pliny the Younger, governor of Bithynia, describes his persecution and executions of Christians. The Decian persecution of 246–251 seriously threatened the Christian Church, but ultimately strengthened Christian defiance. Diocletian undertook the most severe persecution of Christians, from 303 to 311. From the 2nd century onward, the Church Fathers condemned the diverse religions practiced throughout the Empire as "pagan". In the early 4th century, Constantine I became the first emperor to convert to Christianity. He supported the Church financially and made laws that favoured it, but the new religion was already successful, having moved from less than 50,000 to over a million adherents between 150 and 250. Constantine and his successors banned public sacrifice while tolerating other traditional practices. Constantine never engaged in a purge, there were no "pagan martyrs" during his reign, and people who had not converted to Christianity remained in important positions at court. Julian attempted to revive traditional public sacrifice and Hellenistic religion, but met Christian resistance and lack of popular support. Christians of the 4th century believed the conversion of Constantine showed that Christianity had triumphed over paganism (in Heaven) and little further action besides such rhetoric was necessary. Thus, their focus shifted towards heresy. According to Peter Brown, "In most areas, polytheists were not molested, and apart from a few ugly incidents of local violence, Jewish communities also enjoyed a century of stable, even privileged, existence". There were anti-pagan laws, but they were not generally enforced; through the 6th century, centres of paganism existed in Athens, Gaza, Alexandria, and elsewhere. According to recent Jewish scholarship, toleration of the Jews was maintained under Christian emperors. This did not extend to heretics: Theodosius I made multiple laws and acted against alternate forms of Christianity, and heretics were persecuted and killed by both the government and the church throughout late antiquity. Non-Christians were not persecuted until the 6th century. Rome's original religious hierarchy and ritual influenced Christian forms, and many pre-Christian practices survived in Christian festivals and local traditions. Legacy Several states claimed to be the Roman Empire's successor. The Holy Roman Empire was established in 800 when Pope Leo III crowned Charlemagne as Roman emperor. The Russian Tsardom, as inheritor of the Byzantine Empire's Orthodox Christian tradition, counted itself the Third Rome (Constantinople having been the second), in accordance with the concept of translatio imperii. The last Eastern Roman titular, Andreas Palaiologos, sold the title of Emperor of Constantinople to Charles VIII of France; upon Charles' death, Palaiologos reclaimed the title and on his death granted it to Ferdinand and Isabella and their successors, who never used it. 
When the Ottomans, who based their state on the Byzantine model, took Constantinople in 1453, Mehmed II established his capital there and claimed to sit on the throne of the Roman Empire. He even launched an invasion of Otranto with the purpose of re-uniting the Empire, but the campaign was aborted by his death. In the medieval West, "Roman" came to mean the church and the Catholic Pope. The Greek form Romaioi remained attached to the Greek-speaking Christian population of the Byzantine Empire and is still used by Greeks. The Roman Empire's control of the Italian Peninsula influenced Italian nationalism and the unification of Italy (Risorgimento) in 1861. In the United States, the founders saw Athenian democracy and Roman republicanism as models for the mixed constitution, but regarded the emperor as a figure of tyranny. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Personal_area_network] | [TOKENS: 580] |
Contents Personal area network A personal area network (PAN) is a computer network for interconnecting electronic devices within an individual person's workspace. A PAN provides data transmission among devices such as computers, smartphones, tablets and personal digital assistants. PANs can be used for communication among the personal devices themselves, or for connecting to a higher-level network and the Internet, where one master device takes on the role of gateway. A PAN may be carried over wired interfaces such as USB, or it may be carried wirelessly, in which case it is called a wireless personal area network (WPAN) and uses short-distance wireless network technology such as IrDA, Wireless USB, Bluetooth, NearLink or Zigbee. The reach of a WPAN varies from a few centimetres to a few metres. WPANs specifically tailored for low-power operation of wireless sensors are sometimes called low-power personal area networks (LPPANs) to better distinguish them from low-power wide-area networks (LPWANs). Wired Wired personal area networks provide short connections between peripherals. Example technologies include USB, IEEE 1394 and Thunderbolt.[citation needed] Wireless A wireless personal area network (WPAN) is a personal area network in which the connections are wireless. IEEE 802.15 has produced standards for several types of PANs operating in the ISM band, including Bluetooth. The Infrared Data Association (IrDA) has produced standards for WPANs that operate using infrared communications. Bluetooth uses short-range radio waves. Uses in a WPAN include, for example, Bluetooth devices such as keyboards, pointing devices, audio headsets, and printers that may connect to smartwatches, cell phones, or computers. A Bluetooth WPAN is also called a piconet, and is composed of up to 8 active devices in a master-slave relationship (a very large number of additional devices can be connected in parked mode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 metres (33 ft), although ranges of up to 100 metres (330 ft) can be reached under ideal circumstances. Long-range Bluetooth routers with augmented antenna arrays connect Bluetooth devices up to 1,000 feet (300 m). With Bluetooth mesh networking, the range and number of devices are extended by using mesh networking techniques to relay information from one device to another. Such a network does not have a master device and may or may not be treated as a WPAN. IrDA uses infrared light, which has a frequency below the human eye's sensitivity. Infrared is used in other wireless communications applications, for instance, in remote controls. Typical WPAN devices that use IrDA include printers, keyboards, and other serial communication interfaces. See also References External links Media related to Personal area networks (PAN) at Wikimedia Commons |
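The piconet topology described above, one master plus at most seven active slaves with further devices held in parked mode, can be illustrated with a small model. The following Python sketch is illustrative only: the names Piconet, add_device and activate, and the sample device names, are hypothetical and do not correspond to any real Bluetooth API or specification text; the code simply encodes the membership limits stated in the article.

# Minimal sketch of the piconet membership rules described above:
# one master, up to 7 active slaves (8 active devices in total), and
# additional devices held in "parked" mode. Names are illustrative,
# not part of any real Bluetooth stack.

class Piconet:
    MAX_ACTIVE = 8  # master + up to 7 active slaves

    def __init__(self, master: str):
        self.master = master               # the first device becomes the master
        self.active_slaves: list[str] = []
        self.parked: list[str] = []

    def add_device(self, name: str) -> str:
        """Join a device: active if a slot is free, otherwise parked."""
        if 1 + len(self.active_slaves) < self.MAX_ACTIVE:
            self.active_slaves.append(name)
            return "active"
        self.parked.append(name)
        return "parked"

    def activate(self, name: str) -> bool:
        """Move a parked device into an active slot, if one is free."""
        if name in self.parked and 1 + len(self.active_slaves) < self.MAX_ACTIVE:
            self.parked.remove(name)
            self.active_slaves.append(name)
            return True
        return False


if __name__ == "__main__":
    net = Piconet("phone")  # the phone acts as master
    for device in ["keyboard", "mouse", "headset", "printer",
                   "watch", "pedal", "remote", "tablet"]:
        print(device, "->", net.add_device(device))
    # The first seven devices join as active slaves; "tablet" is parked
    # because the piconet already holds 8 active devices (master + 7 slaves).

In a real Bluetooth stack these roles are negotiated by the controller rather than assigned in application code; the sketch only captures the counting constraint mentioned in the text.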
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTERaskin198599-29] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of the book documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of using jokes and cartoons as page fillers was also widespread in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, progressing from general to topical to explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forwarding of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Many joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists.
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, while still making it theoretically possible to file the same text under several different motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are: Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS), and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
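Referring back to the concatenated GTVH label described above, the following Python sketch illustrates how the six Knowledge Resources might be recorded and compared as a simple data structure. It is only an illustration of the labelling idea: the field values, the two example jokes, and the similarity measure are invented for this sketch and are not drawn from the humour-research literature.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class GTVHLabel:
    """A joke described by the six GTVH Knowledge Resources."""
    script_opposition: str   # SO, e.g. "smart/stupid"
    logical_mechanism: str   # LM, may be empty for some jokes
    situation: str           # SI, e.g. "changing a light bulb"
    target: str              # TA, the butt of the joke, may be empty
    narrative_strategy: str  # NS, e.g. "riddle"
    language: str            # LA, the actual wording of the punchline

def shared_krs(a: GTVHLabel, b: GTVHLabel) -> int:
    """Count how many Knowledge Resources two joke labels have in common."""
    return sum(getattr(a, f.name) == getattr(b, f.name) for f in fields(a))

# Two hypothetical light-bulb jokes: same SO, LM, SI and NS, different TA and LA.
joke1 = GTVHLabel("smart/stupid", "faulty reasoning", "changing a light bulb",
                  "group A", "riddle", "How many ... does it take ...?")
joke2 = GTVHLabel("smart/stupid", "faulty reasoning", "changing a light bulb",
                  "group B", "riddle", "How many ... are needed ...?")
print(shared_krs(joke1, joke2))  # -> 4 shared KRs
```

A fuller sketch could also encode the KR hierarchy mentioned above, for example by rejecting any label whose Situation is a light-bulb scenario but whose Narrative Strategy is not a riddle.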
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes, and using the same jokes across languages would not be feasible, so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
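The template-driven approach described in the computational humour paragraph above can be pictured with a toy sketch. The joke frame and the small table of sound-alike words below are invented for illustration; they do not reproduce the behaviour of any published punning program.

```python
import random

# A toy template-based "punning" generator in the spirit of the early
# programs described above: one fixed joke frame plus a small, hand-made
# table of sound-alike word pairs. No linguistic knowledge is involved.
PUN_TABLE = [
    # (answer word, word it sounds like, property of the sound-alike)
    ("cheetah", "cheater", "never plays fair"),
    ("lion", "lying", "never tells the truth"),
]

TEMPLATE = "Q: What do you call a big cat that {property}? A: A {animal}."

def make_pun(rng: random.Random) -> str:
    animal, _sound_alike, prop = rng.choice(PUN_TABLE)
    return TEMPLATE.format(property=prop, animal=animal)

if __name__ == "__main__":
    print(make_pun(random.Random(0)))
```

Even this trivial example shows the limitation noted above: the program manipulates strings it does not understand, so every "pun" must already be built into its table.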
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/FastAPI] | [TOKENS: 565] |
Contents FastAPI FastAPI is a web framework for building HTTP-based service APIs in Python 3.8+. It uses Pydantic and type hints to validate, serialize and deserialize data. FastAPI also automatically generates OpenAPI documentation for APIs built with it. It was first released in 2018. Components Pydantic is a data validation library for Python. While writing code in an IDE, Pydantic provides type hints based on annotations. FastAPI extensively utilizes Pydantic models for data validation, serialization, and automatic API documentation. These models use standard Python type hints, providing a declarative way to specify the structure and types of data for incoming requests (e.g., HTTP bodies) and outgoing responses. Starlette is a lightweight ASGI framework/toolkit used to support async functionality in Python. Uvicorn is a minimal, low-level web server for async frameworks, following the ASGI specification. Technically, it implements a multi-process model with one main process, which is responsible for managing a pool of worker processes and distributing incoming HTTP requests to them. The number of worker processes is pre-configured, but can also be adjusted up or down at runtime. FastAPI automatically generates OpenAPI documentation for APIs. This documentation includes both Swagger UI and ReDoc, which provide interactive API documentation that can be used to explore and test endpoints in real time. This is particularly useful for developing, testing, and sharing APIs with other developers or users. Swagger UI is accessible by default at the /docs route and ReDoc at the /redoc route. Features FastAPI's architecture inherently supports asynchronous programming. This design allows the single-threaded event loop to handle a large number of concurrent requests efficiently, particularly when dealing with I/O-bound operations like database queries or external API calls. For reference, see the async/await pattern. FastAPI incorporates a Dependency Injection (DI) system to manage and provide services to HTTP endpoints. This mechanism allows developers to declare components such as database sessions or authentication logic as function parameters. FastAPI automatically resolves these dependencies for each request, injecting the necessary instances. WebSockets allow full-duplex communication between a client and the server. This capability is fundamental for applications requiring continuous data exchange, such as instant messaging platforms, live data dashboards, or multiplayer online games. FastAPI leverages the underlying Starlette implementation, allowing for efficient management of connections and message handling. FastAPI enables the execution of background tasks after an HTTP response has been sent to the client. This allows the API to immediately respond to user requests while simultaneously processing non-critical or time-consuming operations in the background. Typical applications include sending email notifications, updating caches, or performing data post-processing. Example The following code shows a simple web application that displays "Hello, World!" when visited:
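The code sample referred to above is not preserved in this extract; the following minimal sketch, written in the style of FastAPI's introductory tutorial, returns a "Hello, World!" message from the root path.

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def read_root():
    # Returned dicts are serialized to JSON automatically.
    return {"message": "Hello, World!"}
```

Saved as main.py, the application can be served with an ASGI server such as Uvicorn (for example, uvicorn main:app --reload), after which the interactive documentation described above is available at /docs and /redoc.

To illustrate the Pydantic-based validation and the dependency injection system described above, here is a second sketch; the Item model, the get_settings dependency, and the /items/ route are hypothetical names chosen for this example.

```python
from fastapi import Depends, FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    # The request body must match these type hints, otherwise FastAPI
    # automatically returns a 422 validation error.
    name: str
    price: float
    quantity: int = 1

def get_settings() -> dict:
    # Stand-in dependency; in practice this might yield a database
    # session or a configuration object.
    return {"currency": "USD"}

@app.post("/items/")
async def create_item(item: Item, settings: dict = Depends(get_settings)):
    total = item.price * item.quantity
    return {"name": item.name, "total": total, "currency": settings["currency"]}
```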
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-:02_196-1] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Originally created by Markus "Notch" Persson using the Java programming language, Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPC) through trading emeralds for different goods and vice versa. 
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough; it takes about nine minutes to scroll past and is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience them as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners on Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually annually—free to players who had purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native PlayStation 5 version of Bedrock Edition was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the software package Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed, and losing mobs would now have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on their own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (ranking as the country with the 30th-smallest elevation span), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Effective_temperature] | [TOKENS: 1739] |
Contents Effective temperature The effective temperature (aka ET) of a body such as a star or planet is the temperature of a black body that would emit the same total amount of electromagnetic radiation. Effective temperature is often used as an estimate of a body's surface temperature when the body's emissivity curve (as a function of wavelength) is not known. When the star's or planet's net emissivity in the relevant wavelength band is less than unity (less than that of a black body), the actual temperature of the body will be higher than the effective temperature. The net emissivity may be low due to surface or atmospheric properties, such as the greenhouse effect. Star The effective temperature of a star is the temperature of a black body with the same luminosity per surface area (\(F_{\mathrm{Bol}}\)) as the star and is defined according to the Stefan–Boltzmann law \(F_{\mathrm{Bol}} = \sigma T_{\mathrm{eff}}^{4}\). Notice that the total (bolometric) luminosity of a star is then \(L = 4\pi R^{2}\sigma T_{\mathrm{eff}}^{4}\), where R is the stellar radius. The definition of the stellar radius is obviously not straightforward. More rigorously, the effective temperature corresponds to the temperature at the radius that is defined by a certain value of the Rosseland optical depth (usually 1) within the stellar atmosphere. The effective temperature and the bolometric luminosity are the two fundamental physical parameters needed to place a star on the Hertzsprung–Russell diagram. Both effective temperature and bolometric luminosity depend on the chemical composition of a star. The effective temperature of the Sun is around 5,778 K. The nominal value defined by the International Astronomical Union for use as a unit of measure of temperature is 5,772±0.8 K. Stars have a decreasing temperature gradient, going from their central core up to the atmosphere. The "core temperature" of the Sun—the temperature at the centre of the Sun where nuclear reactions take place—is estimated to be 15,000,000 K. The color index of a star indicates its temperature from the very cool—by stellar standards—red M stars that radiate heavily in the infrared to the very hot blue O stars that radiate largely in the ultraviolet. Various colour–effective temperature relations exist in the literature. Their relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity. The effective temperature of a star indicates the amount of heat that the star radiates per unit of surface area. From the hottest surfaces to the coolest is the sequence of stellar classifications known as O, B, A, F, G, K, M. A red star could be a tiny red dwarf, a star of feeble energy production and a small surface, or a bloated giant or even supergiant star such as Antares or Betelgeuse, either of which generates far greater energy but passes it through a surface so large that the star radiates little per unit of surface area. A star near the middle of the spectrum, such as the modest Sun or the giant Capella, radiates more energy per unit of surface area than the feeble red dwarf stars or the bloated supergiants, but much less than such a white or blue star as Vega or Rigel. Planet The effective (blackbody) temperature of a planet can be calculated by equating the power received by the planet to the known power emitted by a blackbody of temperature T. Take the case of a planet at a distance D from the star, of luminosity L. 
Assuming the star radiates isotropically and that the planet is a long way from the star, the power absorbed by the planet is given by treating the planet as a disc of radius r, which intercepts some of the power which is spread over the surface of a sphere of radius D (the distance of the planet from the star). The calculation assumes the planet reflects some of the incoming radiation by incorporating a parameter called the albedo (a). An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then: \[ P_{\mathrm{abs}} = \frac{L(1-a)\,\pi r^{2}}{4\pi D^{2}} \] The next assumption we can make is that the entire planet is at the same temperature T, and that the planet radiates as a blackbody. The Stefan–Boltzmann law gives an expression for the power radiated by the planet: \[ P_{\mathrm{rad}} = 4\pi r^{2}\sigma T^{4} \] Equating these two expressions and rearranging gives an expression for the effective temperature: \[ T = \left( \frac{L(1-a)}{16\pi\sigma D^{2}} \right)^{1/4}, \] where \(\sigma\) is the Stefan–Boltzmann constant. Note that the planet's radius has cancelled out of the final expression. The effective temperature for Jupiter from this calculation is 88 K and 51 Pegasi b (Dimidium) is 1,258 K.[citation needed] A better estimate of effective temperature for some planets, such as Jupiter, would need to include the internal heating as a power input. The actual temperature depends on albedo and atmosphere effects. The actual temperature from spectroscopic analysis for HD 209458 b (Osiris) is 1,130 K, but the effective temperature is 1,359 K.[citation needed] The internal heating within Jupiter raises the effective temperature to about 152 K.[citation needed] The surface temperature of a planet can be estimated by modifying the effective-temperature calculation to account for emissivity and temperature variation. The area of the planet that absorbs the power from the star is \(A_{\mathrm{abs}}\), which is some fraction of the total surface area \(A_{\mathrm{total}} = 4\pi r^{2}\), where r is the radius of the planet. This area intercepts some of the power which is spread over the surface of a sphere of radius D. We also allow the planet to reflect some of the incoming radiation by incorporating a parameter a called the albedo. An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then: \[ P_{\mathrm{abs}} = \frac{L(1-a)\,A_{\mathrm{abs}}}{4\pi D^{2}} \] The next assumption we can make is that although the entire planet is not at the same temperature, it will radiate as if it had a temperature T over an area \(A_{\mathrm{rad}}\), which is again some fraction of the total area of the planet. There is also a factor ε, which is the emissivity and represents atmospheric effects. ε ranges from 1 to 0, with 1 meaning the planet is a perfect blackbody and emits all the incident power. The Stefan–Boltzmann law gives an expression for the power radiated by the planet: \[ P_{\mathrm{rad}} = A_{\mathrm{rad}}\,\varepsilon\sigma T^{4} \] Equating these two expressions and rearranging gives an expression for the surface temperature: \[ T = \left( \frac{L(1-a)}{4\pi\sigma\varepsilon D^{2}}\,\frac{A_{\mathrm{abs}}}{A_{\mathrm{rad}}} \right)^{1/4} \] Note the ratio of the two areas. Common assumptions for this ratio are 1/4 for a rapidly rotating body and 1/2 for a slowly rotating body, or a tidally locked body on the sunlit side. This ratio would be 1 for the subsolar point, the point on the planet directly below the sun, and gives the maximum temperature of the planet — a factor of √2 (1.414) greater than the effective temperature of a rapidly rotating planet. Also note here that this equation does not take into account any effects from internal heating of the planet, which can arise directly from sources such as radioactive decay and also be produced by friction resulting from tidal forces. 
Earth has an albedo of about 0.306 and a solar irradiance (L / (4πD²)) of 1361 W m⁻² at its mean orbital radius of 1.5×10⁸ km. The calculation with ε = 1 and the remaining physical constants then gives an Earth effective temperature of 254 K (−19 °C). The actual temperature of Earth's surface is an average 288 K (15 °C) as of 2020. The difference between the two values is called the greenhouse effect. The greenhouse effect results from materials in the atmosphere (greenhouse gases and clouds) absorbing thermal radiation and reducing emissions to space, i.e., reducing the planet's emissivity of thermal radiation from its surface into space. Substituting the surface temperature into the equation and solving for ε gives an effective emissivity of about 0.61 for a 288 K Earth. Furthermore, these values calculate an outgoing thermal radiation flux of 238 W m⁻² (with ε = 0.61 as viewed from space) versus a surface thermal radiation flux of 390 W m⁻² (with ε ≈ 1 at the surface). Both fluxes are near the confidence ranges reported by the IPCC. See also References External links |
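The relations above are easy to check numerically. The following is a minimal Python sketch, not a definitive implementation: it assumes the IAU nominal solar values (luminosity about 3.828×10²⁶ W and radius about 6.957×10⁸ m, which are not quoted in the article text), the Earth figures given above (albedo 0.306, irradiance 1361 W m⁻²), and an absorbing-to-radiating area ratio of 1/4 for a rapidly rotating body; the function names are illustrative only and do not come from any established library.

# Minimal numerical check of the effective-temperature relations described above.
# Assumptions (not all from the article text): IAU nominal solar values
# L = 3.828e26 W, R = 6.957e8 m; Earth albedo 0.306, irradiance 1361 W/m^2,
# area ratio 1/4 for a rapidly rotating body. Names are illustrative only.
from math import pi

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def star_effective_temperature(luminosity, radius):
    """T_eff from L = 4*pi*R^2*sigma*T_eff^4."""
    return (luminosity / (4 * pi * radius**2 * SIGMA)) ** 0.25

def planet_equilibrium_temperature(irradiance, albedo, emissivity=1.0, area_ratio=0.25):
    """T from S*(1-a)*ratio = eps*sigma*T^4, where S = L / (4*pi*D^2)."""
    return (irradiance * (1 - albedo) * area_ratio / (emissivity * SIGMA)) ** 0.25

def planet_effective_emissivity(irradiance, albedo, surface_temp, area_ratio=0.25):
    """Emissivity needed to balance the absorbed flux at a given surface temperature."""
    return irradiance * (1 - albedo) * area_ratio / (SIGMA * surface_temp**4)

print(star_effective_temperature(3.828e26, 6.957e8))    # ~5772 K, the IAU nominal value
print(planet_equilibrium_temperature(1361, 0.306))      # ~254 K for Earth, as quoted above
print(planet_effective_emissivity(1361, 0.306, 288.0))  # ~0.61 for a 288 K surface
print(SIGMA * 288.0**4)                                 # ~390 W/m^2 surface thermal flux

Under these assumptions the sketch reproduces the figures quoted in the article: roughly 5772 K for the Sun, 254 K and an effective emissivity of about 0.61 for Earth, and a surface thermal flux near 390 W m⁻².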
======================================== |
[SOURCE: https://techcrunch.com/video/go-to-market-strategies-for-an-ai-era/] | [TOKENS: 780] |
Go-to-market strategies for an AI era In the season finale of Build Mode, Isabelle Johannessen sits down with Paul Irving, partner and COO of GTMfund, to discuss go-to-market strategies for the AI era. Paul shares specific, actionable advice on how early-stage startups can win even when facing well-funded competitors who iterate at lightning speed. He also explains why distribution has become the final remaining moat when technical advantages disappear in months instead of years, and why every company needs a unique go-to-market motion tailored to their specific ICP. They also dive into the power of warm-introduction mapping and building authentic relationships with operators who can open doors. Irving highlights one of the best parts of the startup ecosystem: the altruistic nature of founders and operators who are genuinely willing to help when you approach them with curiosity and authenticity. Season 2 of Build Mode is launching mid-February. Isabelle Johannessen is our host. Build Mode is produced and edited by Maggie Nye. Audience Development is led by Morgan Little. And a special thanks to the Foundry and Cheddar video teams. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEShultz197612–13Carrell2008312-30] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. It is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of the book documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. Expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same, however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices have been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century means that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
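The multiple-heading problem described above can be made concrete with a small sketch. The motif headings and joke identifiers below are invented for illustration only and do not correspond to actual Thompson Motif Index entries; the point is simply that one joke text, containing an actor, an item and an incident at once, ends up filed under several headings, which is exactly the where-to-order-it and where-to-find-it confusion just noted.

```python
# A minimal sketch (illustrative names, not real Thompson Motif Index entries)
# of filing one joke text under several motif headings at once.
from collections import defaultdict

motif_index = defaultdict(set)  # motif heading -> set of joke identifiers

def file_joke(joke_id, motifs):
    """File one joke under every motif heading it contains."""
    for motif in motifs:
        motif_index[motif].add(joke_id)

# A single elephant joke supplies an actor, an item and an incident at once.
file_joke("elephant-in-the-fridge",
          ["actor: elephant", "item: refrigerator", "incident: absurd concealment"])
file_joke("elephant-footprints-in-butter",
          ["actor: elephant", "item: butter", "incident: absurd concealment"])

# The same text is now reachable from three different headings.
for heading, jokes in sorted(motif_index.items()):
    print(heading, "->", sorted(jokes))
```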
It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure include: As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
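As a concrete illustration of the multi-dimensional GTVH label discussed in the classification passage above: the six Knowledge Resources in Attardo and Raskin's scheme are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). The following minimal sketch uses invented field values and shows how the similarity of two jokes can be read off from how many KRs their labels share, with LM and TA allowed to be empty as noted above.

```python
# A minimal sketch, assuming the six standard GTVH Knowledge Resources
# (SO, LM, SI, TA, NS, LA). Field values below are illustrative only.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class GTVHLabel:
    script_opposition: str            # SO
    logical_mechanism: Optional[str]  # LM (may be empty, per the caveat above)
    situation: str                    # SI
    target: Optional[str]             # TA (may be empty)
    narrative_strategy: str           # NS
    language: str                     # LA

def similarity(a: GTVHLabel, b: GTVHLabel) -> float:
    """Fraction of the six KRs on which two joke labels agree (0.0 to 1.0)."""
    shared = sum(getattr(a, f.name) == getattr(b, f.name) for f in fields(GTVHLabel))
    return shared / len(fields(GTVHLabel))

light_bulb = GTVHLabel("dumb/smart", None, "changing a light bulb", "Poles",
                       "riddle", "neutral wording")
viola = GTVHLabel("dumb/smart", None, "orchestra rehearsal", "violists",
                  "riddle", "neutral wording")
print(similarity(light_bulb, viola))  # the labels agree on 4 of 6 KRs -> ~0.67
```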
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered; someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch [de] has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KR). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading |
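A toy sketch of the template-driven punning programs described above follows: a single riddle template filled from a small, pre-defined table of options. The stock puns in the table are included only for illustration, and the program has exactly the non-intelligence the passage describes; it builds nothing beyond its fixed template and word list.

```python
# A minimal sketch of a template-based pun "generator": no understanding,
# just a fixed template and a finite, pre-defined table of fillers.
import random

TEMPLATE = "What do you call {a_or_an} {descriptor}? {punchline}"

PUN_TABLE = [
    {"a_or_an": "a", "descriptor": "fake noodle", "punchline": "An impasta."},
    {"a_or_an": "a", "descriptor": "bear with no teeth", "punchline": "A gummy bear."},
    {"a_or_an": "an", "descriptor": "alligator in a vest", "punchline": "An investigator."},
]

def tell_pun():
    """Pick one pre-defined entry and slot it into the riddle template."""
    return TEMPLATE.format(**random.choice(PUN_TABLE))

if __name__ == "__main__":
    print(tell_pun())
```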
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Baliwag] | [TOKENS: 3326] |
Contents Baliwag Baliwag, officially the City of Baliwag (Tagalog: [bɐˈliʊag]; Filipino: Lungsod ng Baliwag, Kapampangan: Lakanbalen ning Baliwag/Siudad ning Baliwag, also spelled as Baliuag), is a component city in the province of Bulacan, Philippines. According to the 2024 census, it has a population of 174,194 people. The name Baliwag, hispanized as Baliuag, is an old Kapampangan word for "untouched." It was founded in 1732 by Augustinian friars and was incorporated by the Spanish Governor-General on May 26, 1733. It was carved out from the town of Quingua (now Plaridel). Through the years of Spanish domination, Baliuag was predominantly agricultural. People depended on rice farming as their main source of livelihood. Orchards and tumanas yielded fruits and vegetables, which were sold in the public market. Commerce and industry also made important contributions to the local economy. Buntal hat weaving in Baliwag, together with silk weaving (popularly known around the world as Thai silk) and the manufacture of cigar cases, piña fibers, petates (mats), and Sillas de Bejucos (cane chairs), all of fine quality, became known in many parts of the world. The local market also grew. During the early part of the 19th century, Baliwag was already considered one of the most progressive and richest towns in Bulacan. The growth of the public market significantly changed the shape of the city's economy. Baliwag is the major commerce, transportation, entertainment, and educational center of Northern Bulacan. On July 22, 2022, Republic Act No. 11929 lapsed into law. The measure converted the municipality into a component city and standardized its name as the City of Baliwag. On December 17, 2022, a plebiscite was held: 17,814 residents voted in favor of conversion to a component city, while 5,702 voted against. History Fr. Joaquín Martínez de Zúñiga, OSA, a friar, wrote in his 1803 "Historia de las Islas Filipinas" that the Convent or Parochial house of San Agustin in Baliuag was the best in the whole archipelago, that no edifice in Manila could be compared to it in symmetry and beauty, and that its towering belfry served as a viewing point for the town's panorama. The frayle further stated that the Convent was a repository of priceless parish records that dated to the founding of Baliuag as a pueblo or parrochia by the OSA or Augustinians in 1733. But the first convent was erected at Barangay Santa Barbara, Baliuag, before the Parokya was formally established at what is now Plaza Naning, Poblacion. Fr. Joaquín Martínez de Zúñiga arrived in the Philippines on August 3, 1786, and visited Baliuag on February 17, 1802, with Ignacio Maria de Álava y Sáenz de Navarrete. Their host was Baliuag's Parish Priest, Fray Esteban Diez Hidalgo. Fr. Diez served as Baliuag's longest-serving cura parroco, from 1789, having built the church and convent from 1790 to 1801. Spanish records "Apuntes históricos de la provincia augustiniana del Santísimo Nombre de Jesús de Filipinas" reveal that Fr. Juan de Albarran, OSA, was assigned Parish Priest of Baliuag in 1733. The first baptism in Baliuag Church was ordered by Fr. Lector and Fr. Feliz Trillo, Provincial of the Province, on June 7, 1733, while Baliuag was founded and began its de jure existence on May 26, 1733. The pueblo or town was created in the provincial Chapter on May 15, 1734, with the appointment of Fr. Manuel Bazeta/Baseta as first cura parroco. In 1769–1774, the Church of Baliuag was built by Father Gregorio Giner. 
The present structure (the third church to be rebuilt, due to considerable damage during the 1880 Luzon earthquakes) was later rebuilt by Father Esteban Diaz using mortar and stone. The 1866 belfry was completed by Father Matias Novoa, but the July 19, 1880, quake damaged it; it was later repaired by Father Thomas Gresa. The earthquake of June 3, 1863, one of the strongest to ever hit Manila, destroyed the Governor's Palace in Intramuros. Malacañang then became the permanent residence of the head of the country. The massive quake also damaged the Baliuag Church. In 1870, reconstruction began when a temporary house of worship, the "Provincial", along Año 1733 street emerged as a narrow and simple edifice, which was later used as a classroom by the RVM Sisters of the Colegio de la Sagrada Familia (now St. Mary's College of Baliuag). Antonio de Mesa, "Maestrong Tonio", fabricated the parts that finished the Spanish-era Baliuag Church. Baliwag City was the 10th town founded by the Augustinians in the province of Bulacan. Baliuag had 30 curates (1733–1898): Fr. Esteban Diez Hidalgo and Fr. Fausto Lopez served 40 and 24 years, respectively. Fr. Lopez had six children with a local woman, Mariquita, among them Dr. Joaquin Gonzalez, Francisco, former Assemblyman Ricardo Lloret Gonzales (Legislative districts of Bulacan, 5th Philippine Legislature), and the eldest, Jose, who was widely known as "Pepeng Mariquita". The Spanish cura parroco Fr. Ysidoro Prada served in Baliuag during the last decade of the Spanish regime. The Philippine-American civil and military authorities supervised the first municipal elections, having chosen Baliuag as the site of the first Philippine local elections, held on May 7, 1899. Francisco Guererro was elected the first Presidente Municipal. The Filipinos gathered at the plaza of the St. Augustine Church after the Holy Mass, and thereafter the officials were selected based on the qualifications for voters set by the Americans. The first town Gobernadorcillo (1789 title) of Baliuag was Cap. Jose de Guzman. He was assisted by the Tribunal's teniente mayor (chief lieutenant), juez de ganadas (judge of the cattle), juez de sementeras (judge of the field) and juez de policia (judge of the police). Under the 1893 Maura Law (see History of the Philippines (1521–1898)), the title of Gobernadorcillo became "capitan municipal" and that of each juez became teniente. From Baliuag's independence from Quingua (now Plaridel, Bulacan) to 1898, 49 men served as capitan, 13 as alcalde and 92 as Gobernadorcillo. Felix de Lara (1782) and Agustin de Castro (1789) were the first alcalde and Gobernadorcillo, respectively. In 1908, Municipal President Fernando Enrile honored some of these officials, later naming some Baliuag calles (streets) after them. But all these political officials remained under the thumb, and the habito, of the autocratic Augustinian friars, the Baliuag Kura Parokos. Mariano Ponce was a native of Baliuag. He was a founding member of the Propaganda Movement together with José Rizal and Marcelo del Pilar; a former assemblyman of the second district of Bulacan to the Philippine Assembly; and a co-founder, with Graciano López-Jaena, of La Solidaridad. His most common names were Naning (Plaza Naning in Baliuag being named after his nickname); Kalipulako, after the Cebuano hero Lapulapu; and Tagibalang or Tigbalang (Tikbalang), a supernatural being in Filipino folklore. 
Under the American regime (History of the Philippines (1898–1946)), the local government of Baliuag used as its first Municipio the Mariano Yoyongko (Gobernadorcillo in 1885) Principalia in Poblacion (now a part of the market site), which it bought from Yoyongko. On September 15, 1915, Baliuag municipality bought the heritage mansion and lot of Dr. Joaquin Gonzalez. The old Gonzalez mansion served as the Lumang Municipio (the Old Municipio or Town Hall Building, the seat of the local government) for 65 years. It is now the Baliuag Museum and Library. Baliuag produced no fewer than 30 priests, including 3 during the Spanish-Dominican regime and 2 Jesuits during the American regime. Jeorge Allan R. Tengco and Amy R. Tengco (wife of Lito S. Tengco), philanthropists and owners of Baliwag Transit and other chains of business establishments, have been conferred Papal Orders of Chivalry: the Pro Ecclesia et Pontifice on October 3, 2000, and the Dame of the Order of St. Gregory the Great in 2012. On June 16, 1995, communist guerrilla Melencio Salamat Jr., a local leader of the New People's Army (NPA) in Bulacan, surrendered to the authorities along with 94 other members of the NPA at the Baliwag municipal building. Prior to the surrender, Salamat's group was responsible for collecting "revolutionary taxes" from residents along the coastal towns of Bulacan, and had chosen to give up arms after NPA officials were killed on April 28 in Barangay Catulinan, Baliwag. In 2018, the Sangguniang Bayan filed a resolution to request Bulacan 2nd District Representative Gavini Pancho to file a house bill to convert Baliuag into a city. Representatives Eric Go Yap (ACT-CIS Partylist) and Paolo Duterte (Davao City–1st) filed House Bill No. 7362, seeking to convert Baliuag into a city. House Bill No. 7362 was filed on August 12, 2020, for the conversion of the municipality of Baliuag into a component city in the province of Bulacan. House Bill No. 10444, filed by the three aforementioned representatives, was concurred in by the Senate and submitted to the President for signature on June 29, 2022, a day before the end of the 18th Congress. The bill lapsed into law without the President's signature on July 30, 2022 as Republic Act No. 11929. The plebiscite was originally set by the Commission on Elections for January 14, 2023, but its date was later moved to December 17, 2022, following the postponement of the December 2022 Barangay and Sangguniang Kabataan Elections to October 2023. Despite a low voter turnout, a majority of participating voters ratified the cityhood, making Baliwag Bulacan's fourth component city and the country's 148th. Geography With the continuous expansion of Metro Manila, Baliwag is part of Manila's built-up area, which reaches San Ildefonso, Bulacan, at its northernmost part. Baliwag is 28 kilometres (17 mi) from Malolos, 51 kilometres (32 mi) from Manila, and 8 kilometres (5.0 mi) from Pulilan. Baliwag is politically subdivided into 27 barangays, as shown in the matrix below. Each barangay consists of puroks and some have sitios. Demographics In the 2020 census, the population of Baliwag, Bulacan, was 168,470 people, with a density of 3,700 inhabitants per square kilometer or 9,600 inhabitants per square mile. Baliwag at present has six Roman Catholic parishes, a sub-parish and a quasi-parish under the administration of the Diocese of Malolos. The patron saint of Baliwag is St. Augustine because Baliwag was founded by the Augustinians in 1733. 
Other Christian denominations are also present in the city, including Iglesia ni Cristo, The Church of Jesus Christ of Latter-day Saints, Members Church of God International, Bible Baptist Church and Evangelical Christianity. Economy Poverty incidence of Baliwag (chart; source: Philippine Statistics Authority). Government According to Republic Act No. 11929, the official seal of the city shall be circular in form with the dominant colors of green and blue representing the city's vision to promote economic and social progress, sustainable development, and technological advancement. The year 2022 at the center upper part of the official seal indicates the year that Baliwag became a component city. The building structure represents the facade of the town’s seat of government. On top of this image is the year 1733, when Baliwag was founded by the Augustinians. The official seal shall display rice stalks to indicate that the City of Baliwag maintains its commitment to national food security as one of the top rice yielders in the Province of Bulacan. The Baliwag buntal hat, a product woven in the city and regarded as superior in quality to other types of buntal hats produced in the country, is likewise depicted in the official seal. The City of Baliwag may alter its official seal, provided that any change of the seal shall be approved by Congress and registered with the Department of the Interior and Local Government (DILG). Tourism The Buntal Hat Festival is a celebration of the culture of buntal hat making in the city, held annually alongside Mother's Day. Early versions of the buntal hat were wide-brimmed farmer's hats and used unsoftened strips of buntal fiber. The industry expanded into Baliwag, Bulacan, between 1907 and 1909, introduced by Mariano Deveza, who hailed from Lucban, Quezon. Colorful and grandiose decorations and street dancing are the highlights of this celebration. Transportation Public transportation in Baliwag is served by provincial buses, jeepneys, UV Express AUVs, and intra-municipal tricycles. Baliwag Transit, Inc., one of the largest bus transportation systems in the Philippines, is headquartered in Barangay Tibag. It mainly services routes to and from Metro Manila and Central Luzon. There are three major transport lines in the municipality: the Baliwag–Candaba (Benigno S. Aquino Avenue) road going to Pampanga (from downtown Baliwag to Candaba town proper), the Old Cagayan Valley road (Calle Rizal) and the Dona Remedios Trinidad Highway (N1, AH26) going to Manila and Nueva Ecija. The city is located 52 kilometers north of Manila, the capital of the Philippines. Education The Baliwag Schools Division Office governs all educational institutions within the city. It oversees the management and operations of all private and public schools, from primary to secondary level. Gallery See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Visual_sociology] | [TOKENS: 1224] |
Contents Visual sociology Visual sociology is an area of sociology concerned with the visual dimensions of social life. Theory and method Visual sociology can be theoretically framed around three themes. Luc Pauwels suggests that the framework is based on the origin and nature of visuals, research focus and design, and format and purpose. There are at least three approaches to doing visual sociology. In the first of these, the camera is analogous to a tape recorder. Film and video cameras are particularly well suited as data gathering technologies for experiments and small group interactions, classroom studies, ethnography, participant observation, oral history, the use of urban space, etc. The tape recorder captures things that are not preserved in even the best researchers' field notes. Similarly, tape recordings preserve audible data not available in even the most carefully annotated transcripts: timbre, the music of a voice, inflection, intonation, grunts and groans, pace, and space convey meanings easily (mis)understood but not easily gleaned from written words alone. By opening another channel of information, visual recordings preserve still more information. For instance, the raised eyebrow, the wave of a hand, the blink of an eye might convert the apparent meaning of words into their opposite, convey irony, sarcasm, or contradiction. So, regardless of how one analyzes the data or what is done with the visual record, sociologists can use cameras to record and preserve data of interest so it can be studied in detail. Visual recording technology also allows us to manipulate the data. Visual recording can be used to represent other forms of recording technology and non-digital multimedia. Visual recordings have long been employed by natural scientists because they make it possible to speed up, slow down, repeat, stop, and zoom in on things of interest. It is the same in the social sciences: recordings facilitate the study of phenomena that are too fast, or too slow, or too infrequent or too big or too small to study directly "in the life." Most importantly, through editing visual sociologists can juxtapose events to produce meanings. Sociologists may also be able to put cameras in places where one would not put a researcher: where it is dangerous, or where a person would be unwelcome, or simply to remove the observer effect from particular situations, e.g., studying social behavior among school children on a playground. Photo elicitation is another technique of data gathering. This methodological tool is a combination of photography as the visual equivalent of a tape recorder, and ethnography or other qualitative methods. Photo elicitation techniques involve using photographs or film as part of the interview—in essence asking research subjects to discuss the meaning of photographs, films or videos. In this case the images can be taken specially by the researcher with the idea of using them to elicit information; they can belong to the subject, for example family photographs or movies; or they can be gathered from other sources including archives, newspaper and television morgues, or corporate collections. Typically the interviewee's comments or analysis of the visual material is itself recorded, either on audio tape or video, etc. 
Photo voice is a related research method in which researchers give those being studied still or movie cameras. Research participants are taught to use the image making technology but are then responsible for making photos or movies which are subsequently analyzed either by the researchers or the participants, or both. The first use of photo voice was by Wang and Burris (published in 1994), where they defined it as "a method through which knowledge would be generated by people who were normally passive objects in the research process." In any case, in this first sense visual sociology means including and incorporating visual methods of data gathering and analysis in the work of sociology. This method has recently been transferred to other academic disciplines, notably having been pioneered in contemporary religious research. Visual sociology attempts to study visual images produced as part of culture. Art, photographs, film, video, fonts, advertisements, computer icons, landscape, architecture, machines, fashion, makeup, hair style, facial expressions, tattoos, and so on are parts of the complex visual communication system produced by members of societies. The use and understanding of visual images is governed by socially established symbolic codes. Visual images are constructed and may be deconstructed. They may be read as texts in a variety of ways. They can be analyzed with techniques developed in diverse fields of literary criticism, art theory and criticism, content analysis, semiotics, deconstructionism, or the more mundane tools of ethnography. Visual sociologists can categorize and count them; ask people about them; or study their use and the social settings in which they are produced and consumed. So the second meaning of visual sociology is a discipline to study the visual products of society—their production, consumption and meaning. A third dimension of visual sociology is both the use of visual media to communicate sociological understandings to professional and public audiences, and also the use of visual media within sociological research itself. In this context, visual sociology draws on the work of Edward Tufte, whose books Envisioning Information and The Visual Display of Quantitative Information address the communication of quantitative information. Qualitatively, visual sociology can be analyzed through content analysis, semiotics, and conversation analysis. Visual sociology considers the logics of presentation of sociological and anthropological documentarians and ethnographers like Robert Flaherty, Konrad Lorenz, Margaret Mead and Gregory Bateson, and Frederick Wiseman. Visual sociology also requires the development of new forms—for example, data driven computer graphics to represent complex relationships e.g., changing social networks over time, the primitive accumulation of capital, the flow of labor, relations between theory and practice. Visual methods have been popular in various disciplines and fields, such as tourism and event studies. See also References External links |
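The "data driven computer graphics" mentioned above, such as a changing social network over time, can be sketched with off-the-shelf tools. The example below assumes the networkx and matplotlib libraries (neither is named in the source) and an invented two-snapshot friendship network; it simply draws the two snapshots side by side so the change in ties is visible.

```python
# A minimal sketch (assumed tooling: networkx, matplotlib) of a data-driven
# graphic showing an invented social network at two points in time.
import matplotlib.pyplot as plt
import networkx as nx

snapshots = {
    "Time 1": [("Ana", "Ben"), ("Ben", "Cho"), ("Cho", "Ana")],
    "Time 2": [("Ana", "Ben"), ("Ben", "Dee"), ("Dee", "Eli"), ("Cho", "Eli")],
}

fig, axes = plt.subplots(1, len(snapshots), figsize=(8, 4))
for ax, (label, edges) in zip(axes, snapshots.items()):
    graph = nx.Graph(edges)                       # build this snapshot's graph
    nx.draw(graph, ax=ax, with_labels=True, node_color="lightgrey")
    ax.set_title(label)

plt.tight_layout()
plt.savefig("network_over_time.png")              # write the figure to disk
```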
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-OpenAI-2018-7] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected the project to take decades and to eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those at Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, a system capable of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars might turn out to be insufficient, and that the lab might ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August 2024. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. An open letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and oversight authority from the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC adheres to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was needed to cover OpenAI's use of Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot as Microsoft Copilot, added it to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion and made it the world's most valuable privately owned company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting observer of the company's operations; Microsoft gave up this observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave under which OpenAI would receive $350 million worth of CoreWeave shares and access to AI infrastructure in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama passed on the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 rate also covered other implicit costs, such as infrastructure expenses, quality assurance, and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry. Additionally, ChatGPT Pro, a $200-per-month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was initially available only to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the Humanity's Last Exam (HLE) benchmark. Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
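The "API" mentioned above is today exposed through OpenAI's official client libraries. As a rough, hypothetical illustration only (not taken from the article), the sketch below shows how a developer might send a single chat request using the openai Python package (version 1.x); the model name and the prompt are placeholder assumptions.

# Hypothetical sketch of a minimal request to the OpenAI API using the
# official Python client (openai >= 1.0). Model name and prompt are
# illustrative placeholders, not taken from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any available chat model works
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what a public benefit corporation is in one sentence."},
    ],
    max_tokens=60,
)

# The generated reply is carried in the first choice's message.
print(response.choices[0].message.content)

In this sketch, the client object handles authentication and transport, and the reply text is read from the first choice in the response.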
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it described as better at creating spreadsheets, building presentations, perceiving images, writing code, and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for citation management, complex equation formatting, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this reversal. OpenAI's then-chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models carried growing risks, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members said they never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some discussions containing personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his seat on the board of directors, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Superalignment co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined its board. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems, below that threshold, should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the data security and privacy practices the company used to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. The investigation also covered allegations that the company had scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters more competently handled by the federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, as well as Raw Story and Alternate Media Inc., filed lawsuits against OpenAI on copyright grounds. The lawsuits were said to have charted a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that it could make available neither the recipients of ChatGPT's output nor the sources used. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, OpenAI was also sued by the estate of Suzanne Adams, who had allegedly been murdered by her son, Stein-Erik Soelberg, then 56 years old; in the months prior to the killing, Soelberg, who was experiencing paranoid delusions, had often discussed his ideas with ChatGPT. The estate claimed that the company shared responsibility due to the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded by saying it would make ChatGPT safer for users disconnected from reality. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Final_Fantasy_VII] | [TOKENS: 12698] |
Contents Final Fantasy VII Final Fantasy VII[b] is a 1997 role-playing video game developed and published by Square for the PlayStation. The seventh main installment in the Final Fantasy series, it was released in Japan by Square and internationally by Sony Computer Entertainment, becoming the first game in the main series to have a PAL release. The game's story follows Cloud Strife, a mercenary who joins an eco-terrorist organization to stop a world-controlling megacorporation from using the planet's life essence as an energy source. Ensuing events send Cloud and his allies in pursuit of Sephiroth, a superhuman who seeks to wound the planet and harness its healing power in order to be reborn as a god. Throughout their journey, Cloud bonds with his party members, including Aerith Gainsborough, who holds the secret to saving their world. Development began in 1994, originally for the Super Nintendo Entertainment System. After delays and technical difficulties from experimenting with several platforms, most notably the Nintendo 64, Square moved production to the PlayStation, largely due to the advantages of the CD-ROM format. Veteran Final Fantasy staff returned, including series creator and producer Hironobu Sakaguchi, director Yoshinori Kitase, and composer Nobuo Uematsu. The title was the first in the series to use full motion video and 3D computer graphics, featuring 3D character models superimposed over 2D pre-rendered backgrounds. Although the gameplay remained mostly unchanged from previous entries, Final Fantasy VII introduced more widespread science fiction elements and a more realistic presentation. The combined development and marketing budget amounted to approximately US$80 million. Final Fantasy VII received widespread critical acclaim upon release and was a commercial success. Critics praised its graphics, gameplay, music, and story, although some criticism was directed towards the original English localization. It remains widely regarded as a landmark title and one of the greatest and most influential video games of all time. The title won numerous Game of the Year awards and is credited for boosting the sales of the PlayStation and popularizing Japanese role-playing games worldwide. Its success has led to enhanced ports on various platforms, a multimedia subseries called the Compilation of Final Fantasy VII, and a high definition remake trilogy currently comprising Final Fantasy VII Remake (2020) and Final Fantasy VII Rebirth (2024). Gameplay The gameplay of Final Fantasy VII is similar to earlier Final Fantasy titles and Japanese role-playing games. The game features three modes of play: the world map, the field, and the battle screen.: 15, 20 At its grandest scale, players explore the world of Final Fantasy VII on a 3D world map. The world map contains representations of areas for the player to enter, including towns, environments, and ruins. Natural barriers—such as mountains, deserts, and bodies of water—block access by foot to some areas; as the story progresses, the player receives vehicles that help traverse these obstacles, thus opening more of the game world for exploration.: 44 Chocobos can be found in certain spots on the map and, if caught, can be ridden to areas inaccessible on foot or by vehicle.: 46 In field mode, the player navigates fully scaled versions of the areas represented on the world map. VII marks the first time in the series that the mode is represented in a three-dimensional space. 
In this mode, the player can explore the environment, talk with characters, advance the story, and initiate event games.: 15 Event games are short minigames that use special control functions and are often tied to the story.: 18 While in field mode, the player can also make use of shops and inns. Shops allow the player to buy and sell items that can aid Cloud and his party, such as weapons, armor, and accessories. Inns restore the hit points and mana points of characters who rest at them and cure abnormalities contracted during battles.: 17 At random intervals on the world map and in field mode, and at specific moments in the story, the game will enter the battle screen, which places the player characters on one side and the enemies on the other. It employs an Active Time Battle (ATB) system, in which the characters exchange moves until one side is defeated. The damage or healing dealt by either side is quantified on screen. Characters have several statistics that determine their effectiveness in battle; for example, hit points determine how much damage they can take, and magic determines how much damage they can inflict with spells. Each character on the screen has a time gauge; when a character's gauge is full, the player can input a command for them to perform. The commands change as the game progresses, and are dependent on the characters in the player's party and on the abilities, spells, etc., the player has added to their equipment. Commands include attacking with a weapon, casting magic, using items, summoning monsters, and other actions that either damage the enemy or aid the player characters. Final Fantasy VII also features powerful, character-specific commands called Limit Breaks, which can be used only after a special gauge is charged by taking enemy attacks. After being attacked, characters can be afflicted by one or more abnormal "statuses", such as poison or paralysis. These statuses and their adverse effects can be removed by special items or abilities or by resting at an inn. Once all enemies are defeated, the battle ends, and the player is rewarded with money, items, and experience points. If the player is defeated, it is game over and the game must be loaded from the last save point.: 20–27 When not in battle, the player can use the menu screen, where they can review each character's status and statistics, use items and abilities, change equipment, save the game when on the world map or at a save point, and manage orbs called Materia. Materia are the main method of customizing characters in Final Fantasy VII, and can be added to equipment to provide characters with new magic spells, monsters to summon, commands, statistical upgrades, and other benefits. Materia level up through their own experience point system and can be combined to create different effects.: 30–42 Synopsis Final Fantasy VII takes place on a world referred to in-game as the "Planet" and retroactively named "Gaia". The planet's lifeforce, called the Lifestream, is a flow of spiritual energy that gives life to everything on the Planet; its processed form is known as "Mako". On a societal and technological level, the game has been defined as an industrial or post-industrial science fiction setting. During Final Fantasy VII, the Shinra Electric Power Company, a world-dominating megacorporation headquartered in the city of Midgar, is draining the Planet's Lifestream for energy, weakening the Planet and threatening its existence and all life. 
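As a purely illustrative aside, the Active Time Battle, Limit Break, and Materia mechanics summarized in the gameplay overview above can be sketched in a few lines of Python. Everything below is a hypothetical toy model: the class names, numbers, and command labels are invented for illustration and are not drawn from the game's actual code or data.

# Hypothetical sketch of the ATB gauge, Limit gauge, and Materia-driven
# command list described above. All values are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class Materia:
    name: str       # the orb itself, e.g. a spell or summon it grants
    command: str    # battle command it adds when slotted into equipment
    ap: int = 0     # Materia gain their own experience and level up

    def gain_ap(self, amount: int) -> None:
        self.ap += amount

@dataclass
class Character:
    name: str
    hit_points: int
    magic_points: int
    materia: list = field(default_factory=list)
    atb_gauge: float = 0.0    # fills over time; a full gauge allows a command
    limit_gauge: float = 0.0  # charges as the character takes damage

    def tick(self, dt: float, speed: float = 0.25) -> None:
        # Active Time Battle: the gauge fills continuously during combat.
        self.atb_gauge = min(1.0, self.atb_gauge + speed * dt)

    def take_damage(self, amount: int) -> None:
        self.hit_points = max(0, self.hit_points - amount)
        # Taking hits charges the Limit gauge toward a Limit Break.
        self.limit_gauge = min(1.0, self.limit_gauge + amount / 500)

    def available_commands(self) -> list:
        commands = ["Attack", "Item"]
        commands += [m.command for m in self.materia]  # Materia add commands
        if self.limit_gauge >= 1.0:
            commands.append("Limit Break")
        return commands

# Usage sketch: slot Materia, take a hit, let the gauge fill, list commands.
cloud = Character("Cloud", hit_points=300, magic_points=54,
                  materia=[Materia("Fire", command="Magic"),
                           Materia("Chocobo/Mog", command="Summon")])
cloud.take_damage(120)
cloud.tick(dt=4.0)
print(cloud.available_commands())

In this toy model, swapping the Materia list changes the character's command menu without changing the character itself, which mirrors the customization idea described above.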
Significant factions within the game include AVALANCHE, an eco-terrorist group seeking Shinra's downfall so the Planet can recover; the Turks, a covert branch of Shinra's security forces; SOLDIER, an elite Shinra fighting force created by enhancing humans with Mako; and the Cetra, a near-extinct human tribe which maintains a strong connection to the Planet and the Lifestream. The main protagonist is Cloud Strife, an aloof mercenary who claims to be a former 1st Class SOLDIER. Early on, he works with two members of AVALANCHE: Barret Wallace, its brazen but fatherly leader; and Tifa Lockhart, a shy yet nurturing martial artist and his childhood friend. During their journey, they meet Aerith Gainsborough, a carefree flower merchant and one of the last surviving Cetra; Red XIII, an intelligent feline from a tribe that protects the planet; Cait Sith, a fortune-telling robotic cat controlled by repentant Shinra staff member Reeve; and Cid Highwind, a pilot whose dream of being the first human in outer space went unrealized. The group can also recruit Yuffie Kisaragi, a young ninja and skilled Materia thief; and Vincent Valentine, a former Turk and victim of Shinra's experiments. The game's main antagonists are Rufus Shinra, the son of President Shinra and the later leader of the Shinra Corporation; Sephiroth, a former SOLDIER who reappears several years after being presumed dead; and Jenova, a hostile extraterrestrial life-form whom the Cetra imprisoned 2,000 years ago and from whom Sephiroth was created. A key character in Cloud's backstory is Zack Fair, a member of SOLDIER and Aerith's first love. AVALANCHE destroys a Shinra Mako reactor in Midgar, but an attack on another reactor goes wrong and Cloud falls into the city's slums. There, he meets Aerith and protects her from Shinra. Meanwhile, Shinra finds AVALANCHE's base of operations and intentionally collapses part of the upper city level in retaliation for the Mako reactor being destroyed, killing many AVALANCHE members and innocent bystanders as collateral damage. Aerith is also captured, since Shinra believes that, as a Cetra, she can potentially reveal the "Promised Land", which it expects to be overflowing with exploitable Lifestream energy. Cloud, Barret, and Tifa rescue Aerith, and during their escape from Midgar, discover that President Shinra has been murdered by Sephiroth, who had been presumed dead for the previous five years. The party pursues Sephiroth across the Planet, with now-President Rufus on their trail; they are soon joined by the rest of the playable characters. At a Cetra temple, Sephiroth reveals he intends to use a powerful magical artifact known as "Black Materia" to cast the spell "Meteor", which would have a devastating impact on the Planet. Sephiroth claims he will absorb the Lifestream as it attempts to heal the wound caused by Meteor, and become a god-like being in the process. The party retrieves the Black Materia, but Sephiroth manipulates Cloud into surrendering it. Aerith departs alone to stop Sephiroth and follows him to an abandoned Cetra city. While Aerith prays to the Planet for help, Sephiroth attempts to force Cloud to kill her; after this fails, he kills her himself before fleeing, leaving her White Materia behind. The party then learns of Jenova, a hostile alien lifeform who landed on the Planet two thousand years prior to the game's events. Upon arrival on the Planet, Jenova began infecting the Cetra with a virus, and they were nearly wiped out.
However, a small group managed to seal away Jenova in a tomb, which Shinra later unearthed. At Nibelheim, Jenova's cells were used in experiments which led to the creation of Sephiroth. Five years before the game's events, Sephiroth and Cloud visited Nibelheim, where Sephiroth learned of his origins and was driven insane as a result. He murdered the townspeople, and then vanished after Cloud confronted him. At the Northern Crater, the party learns that the "Sephiroths" they have encountered are Jenova clones created by Hojo, an insane Shinra scientist. Cloud confronts the real Sephiroth as he is killing his clones to reunite Jenova's cells, but is again manipulated into giving him the Black Materia. Sephiroth then taunts Cloud by showing that another SOLDIER appears in his place in his memories of Nibelheim, suggesting that Cloud is a failed clone of Sephiroth. Sephiroth summons Meteor and seals the Crater as Cloud falls into the Lifestream and Rufus captures the party. After escaping Shinra, the party discovers Cloud at an island hospital in a catatonic state from Mako poisoning, and Tifa decides to stay as his caretaker. When a planetary defense force called Weapon attacks the island, the two fall into the Lifestream, where Tifa helps Cloud reconstruct his memories. Cloud was a mere infantryman who was never accepted into SOLDIER; the SOLDIER in his memories was his friend Zack. At Nibelheim, Cloud ambushed and wounded Sephiroth after the latter's mental breakdown, but Jenova preserved Sephiroth's life. Hojo experimented on Cloud and Zack for four years, injecting them with Jenova's cells and Mako. They managed to escape, but Zack was killed in the process. The trauma of these events triggered an identity crisis in Cloud, and he constructed a false persona based around Zack's stories and his own fantasies. Cloud accepts his past and reunites with the party, who learn that Aerith's prayer to the Planet had been successful: the Planet had attempted to summon Holy to prevent Meteor's impact, but Sephiroth prevented it from having any effect. Shinra fails to destroy Meteor, but manages to defeat a Weapon and puncture the Northern Crater, while Rufus seemingly dies in the process. After killing Hojo, who is revealed to be Sephiroth's biological father, the party descends to the Planet's core through the opening in the Northern Crater and defeats both Jenova and Sephiroth. The party escapes and Holy is summoned once again, destroying Meteor with help from the Lifestream. Five hundred years later, Red XIII is seen with two cubs looking out over the ruins of Midgar, which are now covered in greenery, showing that the planet has healed. Development Initial concept talks for Final Fantasy VII began in 1994 at Square, following the completion of Final Fantasy VI. As with the previous installment, series creator Hironobu Sakaguchi reduced his role to producer and granted others a more active role in development: these included Yoshinori Kitase, one of the directors of FFVI. The next installment was planned as a 2D game for Nintendo's Super Nintendo Entertainment System (Super NES). After creating an early 2D prototype of it, the team postponed development to help finish Chrono Trigger. Once Chrono Trigger was completed, the team resumed discussions for Final Fantasy VII in 1995. The team discussed continuing with the 2D approach, which would have been the safer and more immediate path, but the industry was on the verge of shifting toward 3D gaming, a change that would require radically new development models.
The team decided to take the riskier option and make a 3D game on new generation hardware but had yet to choose between the cartridge-based Nintendo 64 and the CD-ROM-based PlayStation from Sony Computer Entertainment. The team also considered the Sega Saturn console and Microsoft Windows. Their decision was influenced by two factors: a highly successful tech demo based on Final Fantasy VI using the new Softimage 3D software, and the escalating price of cartridge-based games, which was limiting Square's audience. Tests were made for a Nintendo 64 version, which would use the planned 64DD peripheral despite the lack of 64DD development kits and the prototype device's changing hardware specifications. This version was discarded during early testing, as the 2,000 polygons needed to render the Behemoth monster placed excessive strain on the Nintendo 64 hardware, causing a low frame rate. It would have required an estimated thirty 64DD discs to run Final Fantasy VII properly with the data compression methods of the day. Faced with both technical and economic issues on Nintendo's current hardware, and impressed by the increased storage capacity of CD-ROM when compared to the Nintendo 64 cartridge, Square shifted development of Final Fantasy VII, and all other planned projects, onto the PlayStation. At one point, the Final Fantasy VII staff planned to use a first-person camera for world map exploration, with enemies visible on the world map terrain, and also wanted up to 10 characters in the player's party at once during battle scenes. In the final version, the overall gameplay system remained mostly unchanged from Final Fantasy V and VI, but with an emphasis on player control. The initial decision was for battles to feature shifting camera angles. Battle arenas had a lower polygon count than field areas, which made creating distinctive features more difficult. The summon sequences benefited strongly from the switch to the cinematic style, as the team had struggled to portray their scale using 2D graphics. In his role as producer, Sakaguchi placed much of his effort into developing the battle system. He proposed the Materia system as a way to provide more character customization than previous Final Fantasy games: battles no longer revolved around characters with innate skills and roles in battle, as Materia could be reconfigured between battles. Artist Tetsuya Nomura also contributed to the gameplay; he designed the Limit Break system as an evolution of the Desperation Attacks used in Final Fantasy VI. The Limit Breaks served a purpose in gameplay while also evoking each character's personality in battle. Square retained the passion-driven development approach of its earlier projects, but extensive capital from those commercial successes now gave it the resources and ambition to create the game it wanted, letting the team focus on quality and scale rather than obsessing over and working around a budget. Final Fantasy VII was at the time one of the most expensive video game projects ever, costing an estimated US$40 million, equivalent to about $61 million in 2017 when adjusted for inflation. Development of the final version took a staff of between 100 and 150 people just over a year to complete. As video game development teams of the era usually numbered only about 20 people, the game was described as having the largest development team of any game up to that point.
The development team was split between both Square's Japanese offices and its new American office in Los Angeles; the American team worked primarily on city backgrounds. The game's art director was Yusuke Naora, who had previously worked as a designer for Final Fantasy VI. With the switch into 3D, Naora realized that he needed to relearn drawing, as 3D visuals require a very different approach than 2D. With the massive scale and scope of the project, Naora was granted a team devoted entirely to the game's visual design. The department's duties included illustration, modeling of 3D characters, texturing, the creation of environments, visual effects, and animation. The Shinra logo, which incorporated a kanji symbol, was drawn by Naora personally. Promotional artwork, in addition to the logo artwork, was created by Yoshitaka Amano, an artist whose association with the series went back to its inception. While he had taken a prominent role in earlier entries, Amano was unable to do so for Final Fantasy VII, due to commitments at overseas exhibitions. His logo artwork was based on Meteor: when he saw images of Meteor, he was not sure how to turn it into suitable artwork. In the end, he created multiple variations of the image and asked staff to choose which they preferred. The green coloring represents the predominant lighting in Midgar and the color of the Lifestream, while the blue reflected the ecological themes present in the story. Its coloring directly influenced the general coloring of the game's environments. Another prominent artist was Nomura. Having impressed Sakaguchi with his proposed ideas, which were handwritten and illustrated rather than simply typed on a PC, Nomura was brought on as main character designer. Nomura stated that when he was brought on, the main scenario had not been completed, but he "went along like, 'I guess first off you need a hero and a heroine', and from there drew the designs while thinking up details about the characters. After [he'd] done the hero and heroine, [he] carried on drawing by thinking what kind of characters would be interesting to have. When [he] handed over the designs [he'd] tell people the character details [he'd] thought up, or write them down on a separate sheet of paper". Something that could not be carried over from earlier titles was the chibi sprite art, as that would not fit with the new graphical direction. Naora, in his role as an assistant character designer and art director, helped adjust each character's appearance so the actions they performed were believable. When designing Cloud and Sephiroth, Nomura was influenced by his view of their rivalry mirroring the legendary animosity between Miyamoto Musashi and Sasaki Kojirō, with Cloud and Sephiroth being Musashi and Kojirō respectively. Sephiroth's look was defined as "kakkoii", a Japanese term combining good looks with coolness. Several of Nomura's designs evolved substantially during development. Cloud's original design of slicked-back black hair was deemed "not very heroic", so developers changed his hair color to bright blond. Vincent's occupation changed from researcher to detective to chemist, and finally to a former Turk with a tragic past. Sakaguchi was responsible for writing the initial plot, which was quite different from the final version. In this draft for the planned SNES version, the game's setting was envisioned as New York City in 1999. 
Similar to the final story, the main characters were part of an organization trying to destroy Mako reactors, but they were pursued by a hot-blooded detective named Joe. The main characters would eventually blow up the city. An early version of the Lifestream concept was present at this stage. According to Sakaguchi, his mother had died while Final Fantasy III was being developed, and choosing life as a theme helped him cope with her death in a rational and analytical manner. Square eventually used the New York setting in Parasite Eve (1998). While the planned concept was dropped, Final Fantasy VII still marked a drastic shift in setting from previous entries, dropping the Medieval fantasy elements in favor of a world that was "ambiguously futuristic". When Kitase was put in charge of Final Fantasy VII, he and Nomura reworked the entire initial plot. Scenario writer Kazushige Nojima joined the team after finishing work on Bahamut Lagoon. While Final Fantasy VI featured an ensemble cast of numerous playable characters that were equally important, the team soon decided to develop a central protagonist for FFVII. The pursuit of Sephiroth that formed most of the main narrative was suggested by Nomura, as nothing similar had been done in the series before. Kitase and Nojima conceived AVALANCHE and Shinra as opposing organizations and created Cloud's backstory as well as his relationship to Sephiroth. Among Nojima's biggest contributions to the plot were Cloud's memories and split personality; this included the eventual conclusion involving his newly created character of Zack. The crew helped Kitase adjust the specifics of Sakaguchi's original Lifestream concept. Regarding the overall theme of the game, Sakaguchi said it was "not enough to make 'life' the theme, you need to depict living and dying. In any event, you need to portray death". Consequently, Nomura proposed killing off the heroine. Aerith had been the only heroine, but the death of a female protagonist would necessitate a second; this led to the creation of Tifa. The developers decided to kill Aerith, as her death would be the most devastating and consequential. Kitase wanted to depict it as very sudden and unexpected, leaving "not a dramatic feeling but great emptiness", "feelings of reality and not Hollywood". The script for the scene was written by Nojima. Kitase and Nojima then planned that most of the main cast would die shortly before the final battle; Nomura vetoed the idea because he felt it would undermine the impact of Aerith's death. Several character relations and statuses underwent changes during development. Aerith was to be Sephiroth's sister, which influenced the design of her hair. The team then made Sephiroth a previous love interest of hers to deepen her backstory, but later swapped him with Zack. Vincent and Yuffie were to be part of the main narrative, but due to time constraints, they were nearly cut and eventually relegated to being optional characters. Nojima was charged with writing the scenario and unifying the team's ideas into a cohesive narrative, as Kitase was impressed with his earlier work on the mystery-like Heracles no Eikō III: Kamigami no Chinmoku, an entry in the Glory of Heracles series. To make the characters more realistic, Nojima wrote scenes in which they would occasionally argue and raise objections: while this inevitably slowed down the pace of the story, it added depth to the characters. 
The graphical improvements allowed even relatively bland lines of dialogue to be enhanced with reactions and poses from the 3D character models. Voice acting would have led to significant load times, so it was omitted. Masato Kato wrote several late-game scenes, including the Lifestream sequence and Cloud and Tifa's conversation before the final battle. Initially unaffiliated with the project, Kato was called on to help flesh out less important story scenes. He wrote his scenes to his own tastes without outside consultation, something he later regretted. With the shift from the SNES to the next generation consoles, Final Fantasy VII became the first project in the series to use 3D computer graphics. Aside from the story, Final Fantasy VI had many details undecided when development began; most design elements were hashed out along the way. In contrast, with Final Fantasy VII, the developers knew from the outset it was going to be "a real 3D game", so from the earliest planning stage, detailed designs were in existence. The script was also finalized, and the image for the graphics had been fleshed out. This meant that when actual development work began, storyboards for the game were already in place. The shift from cartridge ROM to CD-ROM posed some problems: according to lead programmer Ken Narita, the CD-ROM had a slower access speed, delaying some actions during the game, so the team needed to overcome this issue. Certain tricks were used to conceal load times, such as offering animations to keep players from getting bored. When it was decided to use 3D graphics, there was a discussion among the staff whether to use sprite-based characters on 3D backgrounds or fully rendered polygonal models. While sprites proved more popular with the staff, the polygon models were chosen as they could better express emotion. This decision was influenced by the team's exposure to the 3D character models used in Alone in the Dark. Sakaguchi decided to use deformed models for field navigation and real-time event scenes, for better expression of emotion, while realistically proportioned models would be used in battles. The team purchased Silicon Graphics Onyx supercomputers and related workstations, and accompanying software including Softimage 3D, PowerAnimator, and N-World for an estimated total of $21 million. Many team members had never seen the technology before. The transition from 2D graphics to 3D environments overlaid on pre-rendered backgrounds was accompanied by a focus on a more realistic presentation. In previous entries, the sizes for characters and environments were fixed, and the player saw things from a scrolling perspective. This changed with Final Fantasy VII; environments shifted with camera angles, and character model sizes shifted depending on both their place in the environment and their distance from the camera, giving a sense of scale. The choice of this highly cinematic style of storytelling, contrasting directly with Square's previous games, was attributed to Kitase, who was a fan of films and had an interest in the parallels between film and video game narrative. Character movement during in-game events was done by the character designers in the planning group. While designers normally cooperate with a motion specialist for such animations, the designers taught themselves motion work, resulting in each character's movements differing depending on their creators—some designers liked exaggerated movements, while others went for subtlety. 
Much of the time was spent on each character's day-to-day, routine animations. Motion specialists were brought in for the game's battle animations. The first characters the team worked with were Cloud and Barret. Some of the real-time effects, such as an explosion near the opening, were hand-drawn rather than computer-animated. The main creative force behind the overall 3D presentation was Kazuyuki Hashimoto, the general supervisor for these sequences. Experienced in the new technology the team had brought on board, he accepted the post at Square because the team's approach aligned with his own creative spirit. A major milestone in development was the synchronization of real-time graphics with computer-generated full motion video (FMV) cutscenes for some story sequences, notably an early sequence in which a real-time model of Cloud jumps onto an FMV-rendered moving train. The backgrounds were created by overlaying two 2D graphic layers and changing the motion speed of each to simulate depth perception. While this was not a new technique, the increased power of the PlayStation enabled a more elaborate version of this effect. The biggest issue with the 3D graphics was the large memory storage gap between the development hardware and the console: while the early 3D tech demo had been developed on a machine with over 400 megabytes of total memory, the PlayStation had only two megabytes of system memory and 500 kilobytes for texture memory. The team needed to work out how to shrink the amount of data while preserving the desired effects. Sony helped reluctantly: it had hoped to keep Square's direct involvement limited to a standard API package, but eventually relented and allowed the team direct access to the hardware specifications. Final Fantasy VII features two types of cutscenes: real-time cutscenes featuring polygon models on pre-rendered backgrounds, and FMV cutscenes. The game's computer-generated imagery (CGI) FMVs were produced by Visual Works, a then-new subsidiary of Square that specialized in computer graphics and FMV creation. Visual Works had created the initial movie concept for a 3D game project. The FMVs were created by an international team spanning Japan and North America and involving talent from the gaming and film industries; Western contributors included artists and staff who had worked on the Star Wars film series, Jurassic Park, Terminator 2: Judgment Day, and True Lies. The team tried to create additional optional CGI content which would bring optional characters Vincent and Yuffie into the ending. As this would have further increased the number of discs the game needed, the idea was discarded. Kazuyuki Ikumori, a future key figure at Visual Works, helped with the creation of the CGI cutscenes, in addition to general background design. The CGI FMV sequences total around 40 minutes of footage, something only possible with the PlayStation's extra memory space and graphical power. This innovation brought with it the added difficulty of ensuring that the inferiority of the in-game graphics in comparison to the FMV sequences was not too obvious. Kitase has described the process of making the in-game environments as detailed as possible as "a daunting task". The musical score of Final Fantasy VII was composed, arranged, and produced by Nobuo Uematsu, who had served as the sole composer for the six previous Final Fantasy games.
Originally, Uematsu had planned to use CD-quality music with vocal performances to take advantage of the console's audio capabilities, but found that it resulted in the game having much longer loading times for each area. Uematsu then decided that the higher-quality audio was not worth the trade-off with performance, and opted instead to use MIDI-like sounds produced by the console's internal sound sequencer, similar to how his soundtracks for the previous games in the series on the Super NES were implemented. While the Super NES only had eight sound channels to work with, the PlayStation had twenty-four. Eight were reserved for sound effects, leaving sixteen available for the music. Uematsu's approach to composing the game's music was to treat it like a film soundtrack and compose music that reflected the mood of the scenes, rather than trying to make strong melodies to "define the game", as he felt that approach would come across as too strong when placed alongside the game's new 3D visuals. As an example, he composed the track intended for the scene in the game where Aerith Gainsborough is killed to be "sad but beautiful", rather than more overtly emotional, creating what he felt was a more understated feeling. Uematsu additionally said that the soundtrack had a feel of "realism", which also prevented him from using "exorbitant, crazy music". The first piece that Uematsu composed for the game was the opening theme; game director Yoshinori Kitase showed him the opening cinematic and asked him to begin the project there. The track was well received in the company, which gave Uematsu "a sense that it was going to be a really good project". Final Fantasy VII was the first game in the series to include a track with high-quality digitized vocals, "One-Winged Angel", which accompanies a section of the final battle of the game. The track has been called Uematsu's "most recognizable contribution" to the music of the Final Fantasy series, an assessment with which Uematsu agrees. Inspired by The Rite of Spring by Igor Stravinsky to make a more "classical" track, and by rock and roll music from the late 1960s and early 1970s to make an orchestral track with a "destructive impact", he spent two weeks composing short unconnected musical phrases, and then arranged them together into "One-Winged Angel", an approach he had never used before. Music from the game has been released in several albums. Square released the main soundtrack album, Final Fantasy VII Original Soundtrack, on four Compact Discs through its DigiCube subsidiary in 1997. A limited edition release was also produced, containing illustrated liner notes. The regular edition of the album reached third on the Japan Oricon charts, while the limited edition reached 19th. Overall, the album had sold nearly 150,000 copies by January 2010. A single-disc album of selected tracks from the original soundtrack, along with three arranged pieces, titled Final Fantasy VII Reunion Tracks, was also released by DigiCube in 1997, reaching 20th on the Japan Oricon charts. A third album, Piano Collections Final Fantasy VII, was released by DigiCube in 2003, and contains one disc of piano arrangements of tracks from the game. It was arranged by Shirō Hamaguchi and performed by Seiji Honda, and reached 228th on the Oricon charts. Release Final Fantasy VII was announced in February 1996. Square president and chief executive officer Tomoyuki Takechi was fairly confident that Japanese players would make the game a commercial success despite it being on a new platform.
A playable demo was included on a disc giveaway at the 1996 Tokyo Game Show, dubbed Square's Preview Extra: Final Fantasy VII & Siggraph '95 Works. The disc also included the early test footage Square created using characters from Final Fantasy VI. The initial release date was at some point in 1996, but to properly realize their vision, Square postponed the release date almost a full year. Final Fantasy VII was released on January 31, 1997. It was published in Japan by Square. A re-release of the game based on its Western version, titled Final Fantasy VII International, was released on October 2 the same year. This improved International version would kickstart the trend for Square to create an updated version for the Japanese release, based on the enhanced Western versions. The International version was re-released as a physical disc as part of the Final Fantasy 25th Anniversary Ultimate Box Japanese package on December 18, 2012. While its success in Japan had been taken for granted by Square executives, North America and Europe were another matter, as up to that time the Japanese role-playing genre was still a niche market in Western territories. Sony, due to the PlayStation's struggles against Nintendo and Sega's home consoles, lobbied for the publishing rights in North America and Europe following Final Fantasy VII's transfer to PlayStation—to further persuade Square, Sony offered a lucrative royalties deal with profits potentially equaling those Square would get by self-publishing the game. Square accepted Sony's offer as Square itself lacked Western publishing experience. Square was uncertain about the game's success, as other JRPGs including Final Fantasy VI had met with poor sales outside Japan. To help with promoting the title overseas, Square dissolved their original Washington offices and hired new staff for fresh offices in Costa Mesa. It was first exhibited to the Western public at Electronic Entertainment Expo 1996 (E3). To promote the game overseas, Square and Sony launched a widespread three-month advertising campaign in August 1997. Beginning with a television commercial that ran alongside popular shows such as Saturday Night Live and The Simpsons by TBWA\Chiat\Day, the campaign included numerous articles in both gaming and general interest magazines, advertisements in comics from publishers such as DC Comics and Marvel Comics, a special collaboration with Pepsi, media events, sample discs, and merchandise. According to estimations by Takechi, the total worldwide marketing budget came to US$40 million; $10 million had been spent in Japan, $10 million in Europe, and $20 million in North America. Unlike its predecessors, Final Fantasy VII did not have its numeral adjusted to account for the lack of a Western release for Final Fantasy II, III, and V — while only the fourth Final Fantasy released outside Japan, its Japanese title was retained. It was released in North America on September 7, 1997. The game was released in Europe on November 17, becoming the first Final Fantasy game to be released in Europe. The Western version included additional elements and alterations, such as streamlining of the menu and Materia system, reducing the health of enemies, new visual cues to help with navigation across the world map, and additional cutscenes relating to Cloud's past. A version for PC was developed by Square's Costa Mesa offices. 
Square invested in a PC version to reach as wide a player base as possible; many Western consumers did not own a PlayStation, and Square's deal with Sony did not prohibit such a port. Having never released a title for PC, Square decided to treat the port as a sales experiment. The port was handled by a team of 15 to 20 people, mostly from Costa Mesa but with help from Tokyo. Square did not begin the port until the console version was finished. The team needed to rewrite an estimated 80% of the game's code, due to the need to unify what had been a custom build for a console written by multiple staff members. Consequently, programmers faced problems such as having to unify the original PlayStation version's five different game engines, leading to delays. The PC version came with a license for Yamaha Corporation's software synthesizer S-YXG70, allowing high-quality sequenced music despite varying sound hardware setups on different user computers. The conversion of the nearly 100 original musical pieces to XG format files was done by Yamaha. To maximize their chances of success, Square searched for a Western company to assist with releasing the PC version. Eidos Interactive, whose release of Tomb Raider had turned them into a publishing giant, agreed to market and publish the port. The port was announced in December 1997, along with Eidos' exclusivity deal for North America and Europe at the time, though the port was rumored to happen as early as December 1996, prior to the PlayStation version's release. To help the product stand out in stores, Eidos chose a trapezoidal shape for the cover and box. They agreed on a contract price of $1.8 million, making initial sales forecasts of 100,000 units based on that outlay. The PC version was released in North America and Europe on June 25, 1998; the port was not released in Japan. Within one month, sales of the port exceeded the initial forecasts. The PC version would end up providing the source code for subsequent ports. Localization of Final Fantasy VII was handled internally by Square. The English localization, led by Seth Luisi, was completed by a team of about fifty people and faced a variety of problems. According to Luisi, the biggest hurdle was making "the direct Japanese-to-English text translation read correctly in English. The sentence structure and grammar rules for the Japanese language is very different from English", making it difficult for the translation to read like native English without distorting the meaning. Michael Basket was the sole translator for the project, though he received the help of native Japanese speakers from the Tokyo office. The localization was taxing for the team due to their inexperience, lack of professional editors, and poor communication between the North American and Japanese offices. A result of this disconnect was the original localization of Aerith's name—which was intended as a conflation of "air" and "earth"—as "Aeris" due to a lack of communication between localization staff and the QA team. The team also faced several technical issues due to programming practices which took little account of subsequent localization, such as dealing with a fixed-width font and having to insert kanji through language input keys to add special characters (for example, vowels with diacritics) to keep the code working. Consequently, the text was still read as Japanese by the word processor; the computer's spellcheck could not be used, and mistakes had to be caught manually. 
The code used obscure kanji to refer to the main characters' names, which made it unintuitive for the translators to identify characters. Translated text usually takes up more space than the Japanese original, yet still had to fit on the screen appropriately without overusing page breaks (for example, item names, which are written in kanji in the Japanese version, could overflow message windows in translated text); to mitigate this problem, a proportional typeface was implemented in the source code to fit more text on screen. Swear words were used frequently in the localization to help convey the original Japanese meaning, though most profanities were censored in a manner described by Square employee Richard Honeywood as the "old comic book '@#$%!'-type replacement". The European release was described as being in a worse condition, as the translations into multiple European languages were outsourced by Sony to another company, further hindering communication. For the PC port, Square attempted to fix translation and grammar mistakes for the North American and European versions but did not have the time and budget to retranslate all the text. According to Honeywood, the success of Final Fantasy VII in the West encouraged Square to focus more on localization quality; on future games, Square hired additional translators and editors, while also streamlining communication between the development and localization teams. Some months prior to the game's North American release, Sony publicly stated that it was considering cutting the scene at the Honey Bee Inn due to its salacious content, prompting numerous online petitions and letters of protest from RPG fans. Square subsequently stated that it would never allow Sony to localize the game in any way. In addition to translating the text, the North American localization team made tweaks to the gameplay, including reducing the enemy encounter rate, simplifying the Materia menu, and adding new boss fights. The International version of Final Fantasy VII was released on PlayStation Network (PSN) as a PSOne Classic in Japan on April 10, 2009. This version was compatible with both the PlayStation 3 and PlayStation Portable, with support for PlayStation Vita and PlayStation TV added later. Final Fantasy VII was later released as a PSOne Classic in North America, Europe, and Australia on June 2. The PC version was updated by DotEmu for use on modern operating systems and released via Square Enix's North American and European online stores on August 14, 2012. It included high-resolution support, cloud saves, achievements, and a character booster. It was later released via Steam on July 4, 2013, replacing the version available on Square Enix's North American and European online stores. The PC version was released in Japan for the first time on May 16, 2013, exclusively via Square Enix's Japanese online store and under the International version title. It has features unavailable in the Western version, including a high-speed mode, a no-random-encounters mode, and a max-stats command. A release for iOS, based on the PC version and adjusted for mobile devices by D4 Enterprise, was released on August 19, 2015, with an auto-save feature. The PC version was released for PlayStation 4 on December 5, 2015. DotEmu developed the PS4 version, which included the E3 2015 reveal trailer for Final Fantasy VII Remake.
The game was also released for Android on July 7, 2016, for the PlayStation Classic on December 3, 2018, and for the Nintendo Switch and Xbox One, also developed by DotEmu, worldwide on March 26, 2019. Reception Within three days of its release in Japan, Final Fantasy VII sold over two million copies. This popularity inspired thousands of retailers in North America to break street dates in September to meet public demand for the title. In the game's debut weekend in North America, it sold 330,000 copies, and had reached sales of 500,000 copies in less than three weeks. The momentum established in the game's opening weeks continued for several months; Sony announced the game had sold one million copies in North America by early December, prompting business analyst Edward Williams from Monness, Crespi, Hardt & Co. to comment that "Sony redefined the role-playing game (RPG) category and expanded the conventional audience with the launch of Final Fantasy VII". According to Weekly Famitsu, Final Fantasy VII sold 3.27 million units in Japan by the end of 1997. By the end of 2005, the PlayStation version had sold 9.8 million copies including 4 million sales in Japan, making it the highest-selling game in the Final Fantasy series. By the end of 2006, The Best, the bargain reissue of the game, had sold over 158,000 copies in Japan. By May 2010, it had sold over 10 million copies worldwide, making it the most popular title in the series in terms of units sold. The original PC version surpassed Eidos' expectations: while initially forecast to sell 100,000 units, it quickly exceeded sales of one million units, garnering royalties of over $2 million for Square. By August 2015, the PlayStation and PC versions had sold over 11 million units worldwide. Steam Spy estimated the game to have sold over 1.2 million downloads on Steam as of April 2018, with a later Steam leak estimating it had 1.14 million players on the platform as of July 2018. As of June 2020, the game has sold more than 13.3 million units worldwide. As of September 2025, the original version of the game has sold over 15.3 million units worldwide. Final Fantasy VII received universal acclaim from critics upon release, receiving perfect scores from 1Up.com, AllGame, Computer and Video Games, GameFan, GamePro, Next Generation, and official PlayStation magazines for Australia and the United States. It was referred to by GameFan as "quite possibly the greatest game ever made", a quote selected for the back cover of the game's jewel case. GameSpot commented that "never before have technology, playability, and narrative combined as well as in Final Fantasy VII", expressing particular favor toward the game's graphics, audio, and story. The four reviewers of Electronic Gaming Monthly unanimously gave it a 9.5 out of 10 and their "Game of the Month" award, lauding its rendered backgrounds, use of FMV, battles, and especially the story line, though they expressed disappointment that the ending did not resolve all of the loose ends. They also considered the North American localization a dramatic improvement over the original Japanese version. GamePro called the storytelling "dramatic, sentimental, and touching in a way that draws you into the characters", who "come alive thanks to sweetly subtle body movements". Both GamePro and Official U.S. PlayStation Magazine (OPM) said the ATB system gives battles a tension and urgency not usually seen in RPGs, and OPM called the summon animations "absolutely awe-inspiring". 
IGN's Jay Boor insisted the game's graphics were "light years beyond anything ever seen on the PlayStation", and regarded its battle system as its strongest point. Computer and Video Games's Alex C praised the dramatic story and well-developed characters. In addition to calling the graphics "bar none the best the PlayStation has ever seen", Next Generation said of the story that "while FFVII may take a bit to get going, as in every entry in the series, moments of high melodrama are blended with scenes of sheer poetry and vision". Edge noted that Final Fantasy VII had come close to being an interactive movie in playable form, praising its combination of a complex story that went against Western graphic adventures trends and "excellently orchestrated chip music". RPGamer praised the game's soundtrack, both in variety and sheer volume, stating that "Uematsu has done his work exceptionally well" and saying that it was potentially his best work. Final Fantasy VII has received some negative criticism. OPM and GameSpot questioned the game's linear progression. OPM considered the game's translation "a bit muddy". Similarly, RPGamer cited its translation as "packed with typos and other errors which further obscure what is already a very confusing plot". GamePro also considered the Japanese-to-English translation a significant weakness in the game, and IGN regarded the ability to use only three characters at a time as "the game's only shortcoming". Reviewers gave similar praise to the PC version but criticized its various technical faults. Computer Games Magazine said that no other recent game had the same "tendency to fail to work in any capacity on multiple [computers]". Computer Gaming World complained that the music quality suffered on PC sound cards, and Next Generation found the game's pre-rendered backgrounds significantly less impressive than those of the PlayStation version. However, the latter magazine found the higher-resolution battle visuals "absolutely stunning", and Computer Games Magazine said that they showed off the potential graphical power of PCs. All three magazines concluded by praising the game despite its technical flaws, and PC Gamer summarized that, while "Square apparently did only what was required to get its PlayStation game running under Windows", Final Fantasy VII is "still a winner on the PC". Final Fantasy VII was given numerous Game of the Year awards in 1997. At the second CESA Awards, it won the Grand Prize, Scenario Award and Sound Award. During the Academy of Interactive Arts & Sciences' inaugural Interactive Achievement Awards, Final Fantasy VII won in the categories of "Console Adventure Game of the Year" and "Console Role-Playing Game of the Year"; it also received nominations for "Interactive Title of the Year", "Console Game of the Year", "Outstanding Achievement in Art/Graphics", and "Outstanding Achievement in Interactive Design". In the Origins Awards, it won in the category "Best Roleplaying Computer Game of 1997". Final Fantasy VII was awarded Game of the Year by magazines including Game Informer, GamePro, and Hyper. 
It was also awarded the "Readers' Choice All Systems Game of the Year", "Readers' Choice PlayStation Game of the Year" and "Readers' Choice Role-Playing Game of the Year" by Electronic Gaming Monthly (EGM), which gave it Editors' Choice Awards for "Role-Playing Game of the Year" and "Best Graphics" (plus a runner-up slot for "Game of the Year"), and also gave it awards for "Hottest Video Game Babe" (for Tifa Lockhart), "Most Hype for a Game", "Best Ending", and "Best Print Ad". Since 1997, it has been selected by many game magazines as one of the top video games of all time, listed as 91st in EGM's 2001 "100 Best Games of All Time", and as fourth in Retro Gamer's "Top 100 Games" in 2004. In 2018, it was ranked 99th in IGN's "Top 100 Games of All Time" and third in PALGN's "The Greatest 100 Games Ever". Final Fantasy VII was included in "The Greatest Games of All Time" list by GameSpot in 2006, and ranked as second in Empire's 2006 "100 Greatest Games of All Time", as third in Stuff's "100 Greatest Games" in 2008, and as 15th in Game Informer's 2009 "Top 200 Games of All Time" (down five places from the magazine's previous all-time list). GameSpot placed it as the second most influential game ever made in 2002; in 2007, GamePro ranked it 14th on its list of the most important games of all time, and in 2009 it finished in the same place on its list of the most innovative games of all time. In 2012, Time named it one of its "All-Time 100 Video Games". In Game Informer's March 2018 "Readers Choice Top 300 Games of All Time", Final Fantasy VII ranked seventh. In GamesRadar+'s March 2018 ranking of "The 25 best PS1 games of all time", it placed 12th. It has also appeared in numerous other greatest game lists. In 2007, Dengeki PlayStation gave it the "Best Story", "Best RPG" and "Best Overall Game" retrospective awards for games on the original PlayStation. The same year, Play magazine ranked it first on its list of the top 25 role-playing games of all time. GamePro named it the best RPG title of all time in 2008, and featured it in its 2010 article "The 30 Best PSN Games". In 2012, GamesRadar also ranked it as the sixth saddest game ever. On the other hand, GameSpy ranked it seventh on its 2003 list of the most overrated games. Final Fantasy VII has often placed at or near the top of many reader polls of all-time best games. It was voted the "Reader's Choice Game of the Century" in an IGN poll in 2000, and placed second in the "Top 100 Favorite Games of All Time" by Japanese magazine Famitsu in 2006 (it was also voted ninth in Famitsu's 2011 poll of the most tear-inducing games of all time). Users of GameFAQs voted it the "Best Game Ever" in 2004 and 2005, and placed it second in 2009. In 2008, readers of Dengeki magazine voted it the best game ever made, as well as the ninth most tear-inducing game of all time. Legacy The game inspired an unofficial version for the NES by Chinese company Shenzhen Nanjing Technology. This port features Final Fantasy VII scaled back to 2D, with some of the side quests removed. The game also inspired a LittleBigPlanet fan remake by Jamie Colliver. Colliver's adaptation reimagines the entire original game as a platformer and was awarded two Guinness World Records in 2014, for the first complete Final Fantasy remake made in another video game and the first role-playing game remade in LittleBigPlanet.
The game's popularity and open-ended nature also led director Kitase and scenario writer Nojima to establish a plot-related connection between Final Fantasy VII and Final Fantasy X-2. The character Shinra from X-2 proposes the concept of extracting the life energy from within the planet Spira. Nojima has stated that Shinra and his proposal are a deliberate nod to the Shinra Company and that he envisioned the events of Final Fantasy X-2 as a prequel to those in VII. The advances in technology used to create the FMV sequences and computer graphics for Final Fantasy VII allowed Sakaguchi to begin production on the first Final Fantasy film, Final Fantasy: The Spirits Within. The game introduced a particular aesthetic to the series—fantasy suffused with modern-to-advanced technology—which was explored further in Final Fantasy VIII, The Spirits Within, and XV. Re-releases of Square games in Japan with bonus features would occur frequently after the release of Final Fantasy VII International. Later titles that would be re-released as international versions include Final Fantasy X and other follow-ups from the franchise, as well as the Kingdom Hearts series. Final Fantasy VII is credited as having the largest impact of the Final Fantasy series, and with allowing console role-playing games to gain mass-market appeal outside of Japan. Aerith's death in the game has often been referred to as one of the most significant moments from any video game. In addition, Final Fantasy VII is also noted for its use of the unreliable narrator literary concept, drawing comparisons to films such as Fight Club (1999), The Sixth Sense (1999), American Psycho (2000) and Memento (2000). Patrick Holleman and Jeremy Parish argue that the game takes the unreliable narrator concept a step further, with its interactivity establishing a connection between the player and the protagonist Cloud, setting Final Fantasy VII apart from films as well as other video games. According to Holleman, "no RPG has ever deliberately betrayed the connection between protagonist and player like FFVII does". Harry Mackin writing for Paste called the game "a subversion that deconstructs and comments meaningfully on how we think about heroism, masculinity and identity in videogame storytelling". Ric Manning of The Courier-Journal noted elements of psychoanalysis in the game. The game is also noted for its cyberpunk themes; GamesRadar+ called it one of the best games of the genre, and Paste Magazine compared its cyberpunk city of Midgar to Akira and Blade Runner. According to Comic Book Resources, the game's climate change theme is more meaningful in 2019 than it was in 1997. Several characters from Final Fantasy VII have made cameo appearances in other Square Enix titles, most notably the fighting game Ehrgeiz and the popular Final-Fantasy-to-Disney crossover series Kingdom Hearts. Additionally, fighting video game Dissidia Final Fantasy includes Final Fantasy VII characters such as Cloud and Sephiroth, and allows players to fight with characters from throughout the Final Fantasy series, and its follow-up, Dissidia 012, included Tifa as well. Cloud is also a playable character in Final Fantasy Tactics. In December 2015, Cloud was released as a downloadable content character for the Nintendo crossover fighting game Super Smash Bros. for Nintendo 3DS and Wii U, along with a stage based on Midgar. He returned in the 2018 sequel, Super Smash Bros. Ultimate, with Sephiroth being added as downloadable content in December 2020. 
The world of Final Fantasy VII is explored further in the Compilation of Final Fantasy VII, a series of games, animated features, and short stories. The first title in the Compilation is the mobile game Before Crisis, a prequel focusing on the Turks' activities six years before the original game. The CGI film sequel Advent Children, set two years after the game, was the first title announced but the second to be released. Special DVD editions of the film included Last Order, an original video animation that recounts the destruction of Nibelheim. Dirge of Cerberus and its mobile phone counterpart, Dirge of Cerberus Lost Episode: Final Fantasy VII, are third-person shooters set one year after Advent Children. Dirge focuses on the backstory of Vincent Valentine, whose history was left mostly untold in Final Fantasy VII. The most recent title is the PlayStation Portable game Crisis Core, an action role-playing game that centers on Zack's past. Releases not under the Compilation label include Maiden Who Travels the Planet, which follows Aerith's journey in the Lifestream after her death, taking place concurrently with the second half of the original game. In 1998, the Official Final Fantasy VII Strategy Guide was licensed by SquareSoft and published by Brady Games. Final Fantasy VII Snowboarding is a mobile port of the snowboarding minigame from the original game, with different courses for the player to tackle. The game is downloadable on V Cast-compatible mobile phones and was first made available in 2005 in Japan and North America. Final Fantasy VII G-Bike is a mobile game released for iOS and Android in December 2014, based on the motorbike minigame featured in the original game. In September 2007, Square Enix published Final Fantasy VII 10th Anniversary Ultimania, an in-depth compilation of the game's storyline and artwork. Universal Studios Japan is developing a Final Fantasy VII-themed virtual reality attraction. With the announcement and development of the Compilation of Final Fantasy VII, speculation spread that a remake of the original Final Fantasy VII would be released for the PlayStation 3. This conjecture was sparked by a video shown at E3 2005 featuring the opening sequence of Final Fantasy VII recreated using the PlayStation 3's graphical capabilities. After a decade of speculation, a remake was announced at E3 2015, featuring changes to the story and combat system. The remake is being released over three installments, with the first part released for the PlayStation 4 in 2020. The follow-up, Final Fantasy VII Rebirth, was released for PlayStation 5 in February 2024. The final entry is currently in development for Nintendo Switch 2, PlayStation 5, Xbox Series X/S, and Windows. Crisis Core: Final Fantasy VII Reunion, a remaster of Crisis Core released in December 2022, is also considered a prequel to the larger Final Fantasy VII Remake project. Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Storage_area_network] | [TOKENS: 2743] |
Contents Storage area network A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to access data storage devices, such as disk arrays and tape libraries, from servers so that the devices appear to the operating system as direct-attached storage. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN). Although a SAN provides only block-level access, file systems built on top of SANs do provide file-level access and are known as shared-disk file systems. Newer SAN configurations enable hybrid SANs and allow traditional block storage that appears as local storage, but also object storage for web services through APIs. Storage architectures Storage area networks (SANs) are sometimes referred to as the network behind the servers and historically developed out of a centralized data storage model, but with their own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data, and the monitoring of the storage as well as the backup process. A SAN is a combination of hardware and software. It grew out of data-centric mainframe architectures, where clients in a network can connect to several servers that store different types of data. To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN network bandwidth is used for accessing, storing and backing up data. To solve the single point of failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device. DAS was the first network storage system and is still widely used where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available in a LAN. The transfer of data, particularly for backup, therefore still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck. Therefore, SANs were developed, where a dedicated storage network was attached to the LAN, and terabytes of data are transferred over a dedicated high-speed, high-bandwidth network. Within the SAN, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent. In a NAS architecture data is transferred using the TCP and IP protocols over Ethernet. Distinct protocols were developed for SANs, such as Fibre Channel, iSCSI, and InfiniBand. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures. Components SANs have their own networking devices, such as SAN switches. To access the SAN, so-called SAN servers are used, which in turn connect to SAN host adapters.
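Before the individual components are described further, the block-level versus file-level distinction drawn at the start of this article can be illustrated with a short conceptual sketch. This is not real SAN or file-system code; the class names and layout are invented purely to show that a SAN presents numbered blocks, while file semantics are added by a file system running on the host:

# Conceptual sketch only (invented classes, not a real SAN API): the contrast
# between the block-level access a SAN exposes and the file-level view a file
# system layers on top of it.

class BlockDevice:
    """What a SAN volume looks like to the host: numbered fixed-size blocks, no files."""
    def __init__(self, num_blocks: int, block_size: int = 512):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def read_block(self, lba: int) -> bytes:
        return self.blocks[lba]

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == self.block_size
        self.blocks[lba] = data

class SimpleFileLayer:
    """A toy file-level view: it is the file system, not the SAN, that maps
    file names to block addresses."""
    def __init__(self, device: BlockDevice):
        self.device = device
        self.index: dict[str, int] = {}   # file name -> starting block
        self.next_free = 0

    def write_file(self, name: str, data: bytes) -> None:
        padded = data.ljust(self.device.block_size, b"\x00")
        self.device.write_block(self.next_free, padded)
        self.index[name] = self.next_free
        self.next_free += 1

    def read_file(self, name: str) -> bytes:
        return self.device.read_block(self.index[name]).rstrip(b"\x00")

volume = BlockDevice(num_blocks=8)
fs = SimpleFileLayer(volume)
fs.write_file("report.txt", b"quarterly data")
print(fs.read_file("report.txt"))   # b'quarterly data'

In this toy model the block device knows nothing about files, which mirrors why shared-disk file systems are needed when several hosts want file-level access to the same SAN storage.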
Within the SAN, a range of data storage devices may be interconnected, such as SAN-capable disk arrays, JBODs and tape libraries. Servers that allow access to the SAN and its storage devices are said to form the host layer of the SAN. Such servers have host adapters, which are cards that attach to slots on the server motherboard (usually PCI slots) and run with a corresponding firmware and device driver. Through the host adapters the operating system of the server can communicate with the storage devices in the SAN. In Fibre Channel deployments, a cable connects to the host adapter through the gigabit interface converter (GBIC). GBICs are also used on switches and storage devices within the SAN, and they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables. Conversely, the GBIC converts incoming light impulses back into digital bits. The predecessor of the GBIC was called the gigabit link module (GLM). The fabric layer consists of SAN networking devices that include SAN switches, routers, protocol bridges, gateway devices, and cables. SAN network devices move data within the SAN, or between an initiator, such as an HBA port of a server, and a target, such as the port of a storage device. When SANs were first built, hubs were the only devices that were Fibre Channel capable, but Fibre Channel switches were developed and hubs are now rarely found in SANs. Switches have the advantage over hubs that they allow all attached devices to communicate simultaneously, as a switch provides a dedicated link to connect all its ports with one another. When SANs were first built, Fibre Channel had to be implemented over copper cables; these days, multimode optical fibre cables are used in SANs. SANs are usually built with redundancy, so SAN switches are connected with redundant links. SAN switches connect the servers with the storage devices and are typically non-blocking, allowing transmission of data across all attached wires at the same time. For redundancy purposes, SAN switches are set up in a meshed topology. A single SAN switch can have as few as 8 ports and up to 32 ports with modular extensions. So-called director-class switches can have as many as 128 ports. In switched SANs, the Fibre Channel switched fabric protocol FC-SW-6 is used, under which every device in the SAN has a hardcoded World Wide Name (WWN) address in the host bus adapter (HBA). If a device is connected to the SAN, its WWN is registered in the SAN switch name server. In place of a WWN, or worldwide port name (WWPN), SAN Fibre Channel storage device vendors may also hardcode a worldwide node name (WWNN). The ports of storage devices often have a WWN starting with 5, while the bus adapters of servers start with 10 or 21. The serialized Small Computer Systems Interface (SCSI) protocol is often used on top of the Fibre Channel switched fabric protocol in servers and SAN storage devices. The Internet Small Computer Systems Interface (iSCSI) over Ethernet and the InfiniBand protocols may also be found implemented in SANs, but are often bridged into the Fibre Channel SAN. However, InfiniBand and iSCSI storage devices, in particular disk arrays, are available. The various storage devices in a SAN are said to form the storage layer. It can include a variety of hard disk and magnetic tape devices that store data.
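The name-server behaviour described above, in which each device's WWN is recorded by the switch when it joins the fabric, can be modelled with a toy sketch. The code below is purely illustrative, with invented example WWNs and descriptions, and is not a Fibre Channel implementation:

from typing import Optional

# Toy model of a fabric name server: devices log in with their WWN and are
# recorded so other members of the fabric can discover them.

class FabricNameServer:
    def __init__(self):
        self.registry: dict[str, str] = {}   # WWN -> device description

    def register(self, wwn: str, description: str) -> None:
        """Called when a device is connected to a switch port (fabric login)."""
        self.registry[wwn] = description

    def lookup(self, wwn: str) -> Optional[str]:
        return self.registry.get(wwn)

    def list_devices(self) -> list[tuple[str, str]]:
        return sorted(self.registry.items())

fabric = FabricNameServer()
# Storage-device ports often have WWNs starting with 5, server HBAs with 10 or 21;
# the example values below are made up, following that convention.
fabric.register("50:06:01:60:3b:e0:12:34", "disk array, port A")
fabric.register("21:00:00:e0:8b:05:67:89", "server HBA, port 0")

for wwn, desc in fabric.list_devices():
    print(wwn, "->", desc)

The point of the sketch is simply that discovery in a switched fabric is registry-based: a device becomes visible to the rest of the SAN once its WWN appears in the name server.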
In SANs, disk arrays are joined through a RAID, which makes many hard disks look and perform like one big storage device. Every storage device, or even a partition on that storage device, has a logical unit number (LUN) assigned to it. This is a unique number within the SAN. Every node in the SAN, be it a server or another storage device, can access the storage by referencing the LUN. The LUNs allow for the storage capacity of a SAN to be segmented and for the implementation of access controls. A particular server, or a group of servers, may, for example, be given access only to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives a request to read or write data, it will check its access list to establish whether the requesting node is allowed to access the storage area, which is identified by a LUN. LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted. In doing so, LUNs that should never be accessed by the server are masked. Another method to restrict server access to particular SAN storage devices is fabric-based access control, or zoning, which is enforced by the SAN networking devices and servers. Under zoning, server access is restricted to storage devices that are in a particular SAN zone (a simplified sketch of LUN-based access checking appears below). Network protocols A mapping layer to other protocols is used to form a network. Storage networks may also be built using Serial Attached SCSI (SAS) and Serial ATA (SATA) technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from Parallel ATA direct-attached storage. SAS and SATA devices can be networked using SAS expanders. Software The Storage Networking Industry Association (SNIA) defines a SAN as "a network whose primary purpose is the transfer of data between computer systems and storage elements". But a SAN does not just consist of a communication infrastructure; it also has a software management layer. This software organizes the servers, storage devices, and the network so that data can be transferred and stored. Because a SAN does not use direct-attached storage (DAS), the storage devices in the SAN are not owned and managed by a server. A SAN allows a server to access a large data storage capacity, and this storage capacity may also be accessible by other servers. Moreover, SAN software must ensure that data is directly moved between storage devices within the SAN, with minimal server intervention. SAN management software is installed on one or more servers, and management clients on the storage devices. Two approaches have developed in SAN management software: in-band and out-of-band management. In-band means that management data between server and storage devices is transmitted on the same network as the storage data, while out-of-band means that management data is transmitted over dedicated links. SAN management software will collect management data from all storage devices in the storage layer. This includes information on read and write failures, storage capacity bottlenecks and failures of storage devices. SAN management software may integrate with the Simple Network Management Protocol (SNMP). In 1999 the Common Information Model (CIM), an open standard, was introduced for managing storage devices and to provide interoperability. The web-based version of CIM is called Web-Based Enterprise Management (WBEM) and defines SAN storage device objects and process transactions.
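As a brief illustration of the LUN-based access checking referenced above, the following sketch shows a storage device consulting a per-LUN access list before serving a request. The identifiers and sizes are invented for the example; this is a conceptual model, not the interface of any real disk array:

# Conceptual sketch of a per-LUN access check, with invented data.

class StorageArray:
    def __init__(self):
        # LUN -> set of initiators (for example, server HBA names) allowed to use it
        self.access_list: dict[int, set[str]] = {}
        self.data: dict[int, bytearray] = {}

    def provision_lun(self, lun: int, size: int, allowed: set[str]) -> None:
        self.data[lun] = bytearray(size)
        self.access_list[lun] = set(allowed)

    def read(self, initiator: str, lun: int, offset: int, length: int) -> bytes:
        if initiator not in self.access_list.get(lun, set()):
            raise PermissionError(f"{initiator} is not permitted to access LUN {lun}")
        return bytes(self.data[lun][offset:offset + length])

array = StorageArray()
array.provision_lun(lun=0, size=1024, allowed={"hba-server-a"})
array.provision_lun(lun=1, size=1024, allowed={"hba-server-a", "hba-server-b"})

print(len(array.read("hba-server-b", lun=1, offset=0, length=16)))   # allowed: prints 16
try:
    array.read("hba-server-b", lun=0, offset=0, length=16)
except PermissionError as err:
    print(err)   # hba-server-b is not permitted to access LUN 0

LUN masking and zoning achieve a similar segmentation at other points in the path (the host adapter and the fabric respectively); the sketch only shows the general idea of restricting which initiators may use which LUNs.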
Use of CIM and WBEM involves a CIM object manager (CIMOM) to manage objects and interactions, and allows for the central management of SAN storage devices. Basic device management for SANs can also be achieved through the Storage Management Interface Specification (SMI-S), where CIM objects and processes are registered in a directory. Software applications and subsystems can then draw on this directory. Management software applications are also available to configure SAN storage devices, allowing, for example, the configuration of zones and LUNs. Ultimately, SAN networking and storage devices are available from many vendors, and every SAN vendor has its own management and configuration software. Common management in SANs that include devices from different vendors is only possible if vendors make the application programming interface (API) for their devices available to other vendors. In such cases, upper-level SAN management software can manage the SAN devices from other vendors. Filesystems support In a SAN, data is transferred, stored and accessed on a block level. As such, a SAN does not provide data file abstraction, only block-level storage and operations. Server operating systems maintain their own file systems on their own dedicated, non-shared LUNs on the SAN, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, these would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires software. File systems have been developed to work with SAN software to provide file-level access. These are known as shared-disk file systems. In media and entertainment Video editing systems require very high data transfer rates and very low latency. SANs in media and entertainment are often referred to as serverless due to the nature of the configuration, which places the video workflow (ingest, editing, playout) desktop clients directly on the SAN rather than attaching them to servers. Control of data flow is managed by a distributed file system. Per-node bandwidth usage control, sometimes referred to as quality of service (QoS), is especially important in video editing as it ensures fair and prioritized bandwidth usage across the network. Quality of service SAN storage QoS enables the desired storage performance to be calculated and maintained for network customers accessing the device. Several factors affect SAN QoS, such as available bandwidth and latency. Alternatively, over-provisioning can be used to provide additional capacity to compensate for peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually cause all bandwidth to be fully consumed and latency to increase significantly, resulting in SAN performance degradation. Storage virtualization Storage virtualization is the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept called location transparency. This is implemented in modern disk arrays, often using vendor-proprietary technology. However, the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-American_Chemical_Society_2013-17] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. 
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
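The bulk figures quoted just above are mutually consistent: surface gravity scales as mass divided by radius squared, and the radius ratio follows from the volume ratio. A minimal sketch, using only the percentages given in the text:

```python
# Rough check: Mars's surface gravity from its mass and volume relative to Earth.
# g ∝ M / R^2, and R ∝ V^(1/3), so g_mars/g_earth = (mass ratio) / (volume ratio)^(2/3).

mass_ratio = 0.11     # Mars has ~11% of Earth's mass (from the text)
volume_ratio = 0.15   # ~15% of Earth's volume (from the text)

radius_ratio = volume_ratio ** (1 / 3)          # ~0.53, i.e. about half Earth's diameter
gravity_ratio = mass_ratio / radius_ratio ** 2  # ~0.39

print(f"radius ratio  ≈ {radius_ratio:.2f}")
print(f"gravity ratio ≈ {gravity_ratio:.2f}")   # close to the ~38% quoted above
```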
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day (or 22 millirads per day) experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day; lava tubes southwest of Hadriacus Mons may have levels as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars possibly a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The existence of methane could be produced by non-biological process such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to on Earth. Additionally the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. Seasons also produce dry ice covering polar ice caps. Hydrology While Mars contains water in larger amounts, most of it is dust covered water ice at the Martian polar ice caps. 
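The scale height of about 10.8 km quoted above follows from the standard isothermal relation H = kT/(mg); Mars's weaker gravity raises it, while the heavier CO2 molecule lowers it relative to Earth's nitrogen and oxygen mix. A rough sketch, assuming a mean atmospheric temperature of about 210 K and a surface gravity of 3.72 m/s² (both assumed inputs, not stated in this passage):

```python
# Isothermal scale height H = k*T / (m*g) for a CO2-dominated atmosphere.
# The mean temperature (~210 K) and surface gravity (3.72 m/s^2) are assumed inputs.

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 210.0                    # assumed mean atmospheric temperature, K
m_CO2 = 44.01 * 1.66054e-27  # mass of a CO2 molecule, kg
g_mars = 3.72                # Martian surface gravity, m/s^2 (~38% of Earth's)

H = k_B * T / (m_CO2 * g_mars)
print(f"scale height ≈ {H / 1000:.1f} km")   # ≈ 10.7 km, close to the quoted 10.8 km
```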
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
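The statement above that the south polar cap holds enough ice to cover the planet to a depth of 11 metres corresponds to a definite volume, which can be estimated from Mars's surface area; a rough sketch (Mars's mean radius is an assumed input, not given in this passage):

```python
import math

# Ice volume implied by "enough to cover the planet to a depth of 11 m".
# Mars's mean radius (3389.5 km) is an assumed input.

R = 3389.5           # Mars mean radius in km
depth = 11 / 1000    # 11 m expressed in km

surface_area = 4 * math.pi * R**2   # ≈ 1.44e8 km²
ice_volume = surface_area * depth   # ≈ 1.6e6 km³

print(f"surface area ≈ {surface_area / 1e6:.0f} million km²")
print(f"ice volume   ≈ {ice_volume / 1e6:.2f} million km³")
```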
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
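The "five to seven times" enrichment quoted above follows directly from the two D/H values and the stated uncertainty; a minimal check:

```python
# Deuterium enrichment of the Martian atmosphere relative to Earth,
# using the D/H ratios quoted above.

dh_mars, dh_mars_err = 9.3e-4, 1.7e-4
dh_earth = 1.56e-4

lo = (dh_mars - dh_mars_err) / dh_earth
hi = (dh_mars + dh_mars_err) / dh_earth
print(f"enrichment ≈ {dh_mars / dh_earth:.1f}x (range {lo:.1f}x to {hi:.1f}x)")
# ≈ 6.0x, with the uncertainty spanning roughly 4.9x to 7.1x, i.e. the quoted "five to seven times"
```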
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet for missions from Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth near opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with conjunction, when Earth and Mars are on opposite sides of the Sun and roughly in line with it. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet.
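The orbital radii just quoted fix the moons' periods through Kepler's third law, T = 2π√(a³/μ), and the same relation gives the synchronous (areostationary) radius that the next paragraph compares them against. A sketch, with Mars's gravitational parameter and rotation period as assumed inputs rather than figures from the article:

```python
import math

# Kepler's third law T = 2*pi*sqrt(a^3 / mu) applied to Phobos and Deimos,
# plus the synchronous-orbit radius for comparison.

mu = 4.2828e13   # GM of Mars, m^3/s^2 (assumed input)
sol = 88_775     # Martian rotation period in seconds (~24.6 h, assumed input)

def period_hours(a_km: float) -> float:
    """Orbital period in hours for a circular orbit of radius a_km (km) around Mars."""
    a = a_km * 1e3
    return 2 * math.pi * math.sqrt(a**3 / mu) / 3600

print(f"Phobos: {period_hours(9376):.1f} h")    # ≈ 7.7 h, well below one sol
print(f"Deimos: {period_hours(23460):.1f} h")   # ≈ 30.3 h, just above one sol

# Synchronous (areostationary) radius: the circular orbit whose period equals one sol.
a_sync = (mu * sol**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"synchronous radius ≈ {a_sync / 1e3:.0f} km")  # ≈ 20,400 km, between the two moons
```

With these assumed inputs, Phobos circles Mars in under eight hours while Deimos takes just over thirty, straddling the roughly 20,400 km synchronous radius, which is the pattern the surrounding text describes.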
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις. The common Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy was presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609 Johannes Kepler published a ten-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610 the Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. With the telescope the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous concepts of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two flew past without contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioned in orbit before reaching its destination, Phobos. In 1997 Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), started an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from Mars missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally dispelled. However, even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of the impact craters on Mars. Likewise, the glass in impact craters on Mars could have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Local Volume → Virgo Supercluster → Laniakea Supercluster → Pisces–Cetus Supercluster Complex → Local Hole → Observable universe → UniverseEach arrow (→) may be read as "within" or "part of". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-3] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
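The slide rule works because sliding two logarithmic scales against each other adds lengths, and adding logarithms multiplies numbers. A minimal sketch of that principle (an illustration of the underlying mathematics, not a model of any particular instrument):

```python
import math

# The slide-rule principle: log(a) + log(b) = log(a * b), so adding two lengths
# on logarithmic scales performs a multiplication (subtracting them divides).

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply two positive numbers the way a slide rule does: by adding their logarithms."""
    return 10 ** (math.log10(a) + math.log10(b))

print(slide_rule_multiply(2.5, 3.2))   # ≈ 8.0
print(slide_rule_multiply(7.0, 6.0))   # ≈ 42.0
```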
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. System on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which a computer is controlled and provided with data; examples include keyboards, mice and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form; examples include monitors and printers.
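To illustrate the earlier point that circuits arranged as logic gates let some bits control the state of others, the following short Python sketch builds a few gates out of a single NAND primitive and combines them into a half adder. It is an informal illustration only; the function names are invented here and nothing in it corresponds to any particular real hardware.

    # Illustrative sketch: bits controlling bits through logic gates.
    def NAND(a, b):
        # Universal gate: output is 0 only when both inputs are 1.
        return 0 if (a and b) else 1

    # Every other gate can be assembled from NAND alone.
    def NOT(a):
        return NAND(a, a)

    def AND(a, b):
        return NOT(NAND(a, b))

    def OR(a, b):
        return NAND(NOT(a), NOT(b))

    def XOR(a, b):
        return AND(OR(a, b), NAND(a, b))

    # A half adder: two input bits control two output bits (sum and carry).
    def half_adder(a, b):
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(a, "+", b, "-> sum", s, "carry", c)

Chaining such adders, with one stage's outputs controlling the next stage's inputs, is essentially how an arithmetic circuit is built up from simple switches.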
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the instruction from the memory location indicated by the program counter, decodes it into control signals, executes it, and then updates the program counter so that it points to the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
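Tying together the control unit, the program counter and jump instructions described above, the following Python fragment simulates that cycle in a deliberately tiny form. The instruction names and memory layout are invented for the illustration and do not correspond to any real instruction set.

    # A toy control cycle: fetch, decode, execute, update the program counter.
    memory = {"counter": 0, "limit": 3}

    program = [
        ("ADD", "counter", 1),                   # counter = counter + 1
        ("JUMP_IF_LT", "counter", "limit", 0),   # if counter < limit, jump back to instruction 0
        ("HALT",),
    ]

    pc = 0                                       # program counter: index of the next instruction
    while True:
        instruction = program[pc]                # fetch
        op = instruction[0]                      # decode
        if op == "ADD":                          # execute
            _, cell, amount = instruction
            memory[cell] += amount
            pc += 1
        elif op == "JUMP_IF_LT":
            _, cell, other, target = instruction
            pc = target if memory[cell] < memory[other] else pc + 1
        elif op == "HALT":
            break

    print(memory["counter"])                     # 3: the jump made the loop body run three times

The jump instruction does nothing more than write a new value into the program counter, which is exactly how loops and conditional execution arise in the scheme described above.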
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
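The storage conventions described above (one byte holding 2^8 = 256 distinct patterns, two's complement for negative numbers, and several consecutive bytes for larger values) can be checked directly in a few lines of Python. This is a sketch of the conventions themselves, not of any particular machine's memory; the helper function name is invented here.

    # One byte distinguishes 256 values, read either as 0..255 or as -128..+127.
    assert 2 ** 8 == 256

    def twos_complement_byte(value):
        # The unsigned 8-bit pattern that stores `value` in two's complement.
        return value & 0xFF

    assert twos_complement_byte(-1) == 0b11111111     # -1 is stored as 255
    assert twos_complement_byte(-128) == 0b10000000   # the most negative 8-bit value

    # Larger numbers occupy several consecutive bytes (here four, little-endian).
    n = 1_000_000
    cells = n.to_bytes(4, "little")
    print(list(cells))                                # the four byte values as they would sit in memory
    assert int.from_bytes(cells, "little") == n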
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
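The time-sharing scheme described earlier in this section can be sketched with Python generators: a small scheduler hands each "program" a slice of execution in turn, and a task that is waiting on slow input/output simply declines to do work until it is ready. This is a deliberately simplified model; real operating systems rely on hardware interrupts rather than cooperative yielding, and all names below are invented for the illustration.

    import time
    from collections import deque

    def counter_program(name, steps):
        # A compute-bound task: does a little work, then gives up its time slice.
        for i in range(steps):
            print(name, "step", i)
            yield

    def io_bound_program(name, wake_at):
        # A task waiting for a (simulated) slow device; it does no work until ready.
        while time.monotonic() < wake_at:
            yield
        print(name, "I/O finished, doing its work")

    def scheduler(tasks):
        queue = deque(tasks)
        while queue:
            task = queue.popleft()
            try:
                next(task)              # run one time slice of this task
                queue.append(task)      # then send it to the back of the queue
            except StopIteration:
                pass                    # the task has finished; drop it

    scheduler([
        counter_program("A", 3),
        counter_program("B", 3),
        io_bound_program("C", time.monotonic() + 0.01),
    ])

Running this interleaves the steps of A and B while C waits, which is the appearance of simultaneous execution described above even though only one task runs at any instant.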
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
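For illustration, the repetitive-addition task just described can be written in a few lines of Python (a low-level version of the same idea is discussed in the next paragraph): a running total, a counter, and a conditional jump back to the start of the loop body.

    total = 0
    n = 1
    while n <= 1000:     # the conditional "jump back" that repeats the loop body
        total += n       # add the current number to the running total
        n += 1           # move on to the next number
    print(total)         # 500500, computed in a fraction of a second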
Such a program can be written in just a few instructions of a low-level language such as the MIPS assembly language. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications.
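The relationship described above between mnemonics, opcodes and the assembler can be illustrated with a toy assembler in Python. The three-instruction "machine language" and its numeric opcodes are invented purely for this sketch and belong to no real processor.

    # Invented numeric opcodes for a toy machine.
    OPCODES = {"LOAD": 1, "ADD": 2, "PRINT": 3}

    def assemble(source):
        # Translate lines of mnemonics like "ADD 7" into flat numeric machine code.
        machine_code = []
        for line in source.strip().splitlines():
            mnemonic, operand = line.split()
            machine_code += [OPCODES[mnemonic], int(operand)]
        return machine_code

    def run(machine_code):
        # Execute the numeric program: the instructions are just numbers in memory.
        accumulator, pc = 0, 0
        while pc < len(machine_code):
            opcode, operand = machine_code[pc], machine_code[pc + 1]
            if opcode == OPCODES["LOAD"]:
                accumulator = operand
            elif opcode == OPCODES["ADD"]:
                accumulator += operand
            elif opcode == OPCODES["PRINT"]:
                print(accumulator)
            pc += 2

    program = assemble("""
        LOAD 5
        ADD 7
        ADD 30
        PRINT 0
    """)
    print(program)   # the assembled program as a list of numbers, storable like any other data
    run(program)     # prints 42

Because the assembled program is nothing but a list of numbers, it can be kept in the same memory as the data it operates on, which is the stored-program idea discussed above.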
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Hasmonean] | [TOKENS: 11383] |
Contents Hasmonean dynasty The Hasmonean dynasty (/hæzməˈniːən/; Hebrew: חַשְׁמוֹנָאִים Ḥašmōnāʾīm; Greek: Ασμοναϊκή δυναστεία) was the Jewish ruling dynasty of Judea during the Hellenistic times of the Second Temple period (part of classical antiquity), from c. 141 BC to 37 BC. Between c. 141 and c. 116 BC the dynasty ruled Judea semi-autonomously within the Seleucid Empire, and from roughly 110 BC, with the empire disintegrating, gained further autonomy and expanded into the neighboring regions of Perea, Samaria, Idumea, Galilee, and Iturea. The Hasmonean rulers took the Greek title basileus ("king") and the kingdom attained regional power status for several decades. Forces of the Roman Republic intervened in the Hasmonean Civil War in 63 BC, turning the kingdom into a client state and marking an irreversible decline of Hasmonean power; Herod the Great displaced the last reigning Hasmonean client-ruler in 37 BC. Simon Thassi established the dynasty in 141 BC, two decades after his brother Judah Maccabee (יהודה המכבי Yehudah HaMakabi) had defeated the Seleucid army during the Maccabean Revolt of 167 to 160 BC. According to 1 Maccabees, 2 Maccabees, and the first book of The Jewish War by historian Josephus (37 – c. 100 AD), the Seleucid king Antiochus IV Epiphanes (r. 175–164) moved to assert strict control over the Seleucid satrapy of Coele Syria and Phoenicia after his successful invasion of Ptolemaic Egypt (170–168 BC) was turned back by the intervention of the Roman Republic. He sacked Jerusalem and its Temple, suppressing Jewish and Samaritan religious and cultural observances, and imposed Hellenistic practices (c. 168–167 BC). The steady collapse of the Seleucid Empire under attacks from the rising powers of the Roman Republic and the Parthian Empire allowed Judea to regain some autonomy; however, in 63 BC, the kingdom was invaded by the Roman Republic, broken up and set up as a Roman client state. Hyrcanus II and Aristobulus II, Simon's great-grandsons, became pawns in a proxy war between Julius Caesar and Pompey. The deaths of Pompey (48 BC) and Caesar (44 BC), and the related Roman civil wars, temporarily relaxed Rome's grip on the Hasmonean kingdom, allowing a brief reassertion of autonomy backed by the Parthian Empire, rapidly crushed by the Romans under Mark Antony and Augustus. The Hasmonean dynasty had survived for 103 years before yielding to the Herodian dynasty in 37 BC. The installation of Herod the Great (an Idumean) as king in 37 BC made Judea a Roman client state and marked the end of the Hasmonean dynasty. Even then, Herod tried to bolster the legitimacy of his reign by marrying a Hasmonean princess, Mariamne, and planning to drown the last male Hasmonean heir at his Jericho palace. In 6 AD, Rome joined Judea proper, Samaria and Idumea into the Roman province of Judaea. In 44 AD, Rome installed the rule of a procurator side by side with the rule of the Herodian kings (specifically Agrippa I 41–44 and Agrippa II 50–100). Etymology and origins The family name of the Hasmonean dynasty originates from the ancestor of the house, whom Josephus called by the Hellenised form Asmoneus or Asamoneus (Greek: Ἀσαμωναῖος), said to have been the great-grandfather of Mattathias, but about whom nothing more is known. The name appears to come from the Hebrew name Hashmonai (Hebrew: חַשְׁמוֹנַאי, romanized: Ḥašmōnaʾy). An alternative view posits that the Hebrew name Hashmona'i is linked with the village of Heshmon, mentioned in Joshua 15:27. P.J. 
Gott and Logan Licht attribute the name to "Ha Simeon", a veiled reference to the Simeonite Tribe. The origins of the Hasmonean dynasty are somewhat obscure, as the family appears in the historical record only when they rise up against Seleucid rule in the mid-2nd century BC. The family's rise began with Mattathias, a priest from the town of Mod'in, who resisted the anti-Jewish decrees of Antiochus IV and sparked what became the Maccabean Revolt. Josephus and 1 Maccabees identify the family as belonging to the priestly order of Joiarib. The widely known label "Maccabees" originally applied only to Judas Maccabeus, son of Mattathias, whose nickname probably meant "hammer" and indicates his reputation as a fierce fighter. Hasmonean leaders Historical sources The major sources of information about the origin of the Hasmonean dynasty are the books 1 Maccabees and 2 Maccabees, held as canonical scripture by the Catholic, Orthodox, and most Oriental Orthodox churches and as apocryphal by Protestant denominations, although they do not comprise the canonical books of the Hebrew Bible. The books cover the period from 175 BC to 134 BC during which time the Hasmonean dynasty became semi-independent from the Seleucid empire but had not yet expanded far outside of Judea. They are written from the point of view that the salvation of the Jewish people in a crisis came from God through the family of Mattathias, particularly his sons Judas Maccabeus, Jonathan Apphus, and Simon Thassi, and his grandson John Hyrcanus. The books include historical and religious material from the Septuagint that was codified by Catholics and Eastern Orthodox Christians. The other primary source for the Hasmonean dynasty is the first book of The Wars of the Jews and a more detailed history in Antiquities of the Jews by the Jewish historian Josephus (37 – c. 100 AD). Josephus' account is the only primary source covering the history of the Hasmonean dynasty during the period of its expansion and independence between 110 and 63 BC. Notably, Josephus, a Roman citizen and former general in the Galilee, who survived the Jewish–Roman wars of the 1st century, was a Jew who was captured by and cooperated with the Romans, and wrote his books under Roman patronage. Background Between c. 720 BC and 333 BC, the lands of the former Kingdom of Israel and Kingdom of Judah were occupied in turn by Assyria, Babylonia, and the Achaemenid Empire. After the conquest of the Achaemenid Empire by Alexander the Great (c. 333 BC), the entire region was heavily contested between the two major successor states of the Macedonian empire — the Seleucid Empire to the north and the Ptolemaic Kingdom to the south. Between 319 and 302 BC, Jerusalem changed hands seven times. Under Antiochus III the Great, the Seleucids wrested control of Judea from the Ptolemies for the final time, defeating Ptolemy V Epiphanes at the Battle of Panium in 200 BC. Seleucid rule over the Jewish parts of the region then resulted in the rise of Hellenistic cultural and religious practices: "In addition to the turmoil of war, there arose in the Jewish nation pro-Seleucid and pro-Ptolemaic parties; and the schism exercised great influence upon the Judaism of the time. It was in Antioch that the Jews first made the acquaintance of Hellenism and of the more corrupt sides of Greek culture; and it was from Antioch that Judea henceforth was ruled."
Seleucid rule over Judea The continuing Hellenization of Judea pitted those who eagerly Hellenized against traditionalists, as the former felt that the latter's orthodoxy held them back; additionally the conflict between Ptolemies and Seleucids further divided them over allegiance to either faction. An example of these divisions is the conflict which broke out between High Priest Onias III (who opposed Hellenisation and favoured the Ptolemies) and his brother Jason (who favoured Hellenisation and the Seleucids) in 175 BC, followed by a period of political intrigue with both Jason and Menelaus bribing the king to win the High Priesthood, and accusations of murder of competing contenders for the title. The result was a brief civil war. The Tobiads, a philo-Hellenistic party, succeeded in placing Jason into the powerful position of High Priest. He established an arena for public games close by the Temple. Author Lee I. Levine notes, "The 'piece de resistance' of Judaean Hellenisation, and the most dramatic of all these developments, occurred in 175 BC, when the high priest Jason converted Jerusalem into a Greek polis replete with gymnasium and ephebeion (2 Maccabees 4). Whether this step represents the culmination of a 150-year process of Hellenisation within Jerusalem in general, or whether it was only the initiative of a small coterie of Jerusalem priests with no wider ramifications, has been debated for decades." Hellenised Jews are known to have engaged in non-surgical foreskin restoration (epispasm) in order to join the dominant Hellenistic cultural practice of socialising naked in the gymnasium, where their circumcision would have carried a social stigma; Classical, Hellenistic, and Roman culture found circumcision to be a cruel, barbaric and repulsive custom. In spring 168 BC, after successfully invading the Ptolemaic kingdom of Egypt, Antiochus IV was humiliatingly pressured by the Romans to withdraw. According to the Roman historian Livy, the Roman senate dispatched the diplomat Gaius Popilius to Egypt, who demanded that Antiochus withdraw. When Antiochus requested time to discuss the matter, Popilius "drew a circle round the king with the stick he was carrying and said, 'Before you step out of that circle give me a reply to lay before the senate.'" While Antiochus was campaigning in Egypt, a rumor spread in Judah that he had been killed. The deposed high priest Jason[clarification needed] took advantage of the situation, attacked Jerusalem, and drove away Menelaus and his followers. Menelaus took refuge in Akra, the Seleucid fortress in Jerusalem. When Antiochus heard of this, he sent an army to Jerusalem, which drove out Jason and his followers and reinstated Menelaus as high priest; he then imposed a tax and established a fortress in Jerusalem. During this period Antiochus tried to suppress public observance of Jewish laws, apparently in an attempt to secure control over the Jews. His government set up an idol of Zeus on the Temple Mount, which Jews considered to be a desecration of the Mount, outlawed observance of the Sabbath and the offering of sacrifices at the Jerusalem Temple, required Jewish leaders to sacrifice to idols and forbade both circumcision and possession of Jewish scriptures, on pain of death. Punitive executions were also instituted.
According to Josephus, "Now Antiochus was not satisfied either with his unexpected taking the city, or with its pillage, or with the great slaughter he had made there; but being overcome with his violent passions, and remembering what he had suffered during the siege, he compelled the Jews to dissolve the laws of their country, and to keep their infants uncircumcised, and to sacrifice swine's flesh upon the altar." The motives of Antiochus are unclear. He may have been incensed at the overthrow of his appointee, Menelaus, he may have been responding to a Jewish revolt that had drawn on the Temple and the Torah for its strength, or he may have been encouraged by a group of radical Hellenisers among the Jews. Maccabean Revolt The author of the First Book of Maccabees regarded the Maccabean revolt as a rising of pious Jews against the Seleucid king who had tried to eradicate their religion and against the Jews who supported him. The author of the Second Book of Maccabees presented the conflict as a struggle between "Judaism" and "Hellenism", words that he was the first to use. Modern scholarship tends to the second view. Most modern scholars argue that the king was intervening in a civil war between traditionalist Jews in the countryside and Hellenised Jews in Jerusalem. According to Joseph P. Schultz, modern scholarship, "considers the Maccabean revolt less as an uprising against foreign oppression than as a civil war between the orthodox and reformist parties in the Jewish camp." In the conflict over the office of High Priest, traditionalists with Hebrew/Aramaic names like Onias contested against Hellenisers with Greek names like Jason or Menelaus. Other authors point to social and economic factors in the conflict. What began as a civil war took on the character of an invasion when the Hellenistic kingdom of Syria sided with the Hellenising Jews against the traditionalists. As the conflict escalated, Antiochus prohibited the practices of the traditionalists, thereby, in a departure from usual Seleucid practice, banning the religion of an entire people. Other scholars argue that while the rising began as a religious rebellion, it was gradually transformed into a war of national liberation. The two greatest twentieth-century scholars of the Maccabean revolt, Elias Bickermann and Victor Tcherikover, each placed the blame on the policies of the Jewish leaders and not on the Seleucid ruler, Antiochus IV Epiphanes, but for different reasons.Bickermann saw the origin of the problem in the attempt of "Hellenised" Jews to reform the "antiquated" and "outdated" religion practised in Jerusalem, and to rid it of superstitious elements. They were the ones who egged on Antiochus IV and instituted the religious reform in Jerusalem. One suspects that [Bickermann] may have been influenced in his view by an antipathy to Reform Judaism in 19th- and 20th-century Germany. Tcherikover, perhaps influenced by socialist concerns, saw the uprising as one of the rural peasants against the rich elite. According to I and II Maccabees, the priestly family of Mattathias (Mattitiyahu in Hebrew), which came to be known as the Maccabees, called the people forth to holy war against the Seleucids. Mattathias' sons Judas (Yehuda), Jonathan (Yonoson/Yonatan), and Simon (Shimon) began a military campaign, initially with disastrous results: one thousand Jewish men, women, and children were killed by Seleucid troops during Sabbath as they refused to fight on the holy day. 
After that, other Jews accepted that when attacked on the Sabbath they should fight back. Eventually the use of guerrilla warfare practices by Judah over several years gave control of the country to the Maccabees: It was now, in the fall of 165, that Judah's successes began to disturb the central government. He appears to have controlled the road from Jaffa to Jerusalem, and thus to have cut off the royal party in the Acra from direct communication with the sea and thus with the government. It is significant that this time the Syrian troops, under the leadership of the governor-general Lysias, took the southerly route, by way of Idumea. Towards the end of 164 BC, after reaching a compromise with Lysias (who retreated to Antioch, perhaps for political reasons, following the death of Antiochus IV while campaigning against the Parthians), Judas entered Jerusalem and re-established the formal religious worship of Yahweh. The feast of Hanukkah was instituted to commemorate the recovery of the temple. Around April 162 Judas laid siege to the Acra, which had remained under Seleucid control; in response, Lysias returned to fight the Jews at the Battle of Beth Zechariah. Despite the favourable outcome of that battle, the continuing resistance of the Maccabees in the mountains of Aphairema (near the original center of the revolt) and troubles in his own home country, prompted by the political situation surrounding the young Antiochus V Eupator, successor of Antiochus IV, forced Lysias to once again negotiate peace with the Maccabees, abandoning his siege of Jerusalem in exchange for the lifting of the Maccabean siege of the Acra.[note 1] In 161, while on his way to assume the governorship, Nicanor, the newly appointed strategos of the region, won a skirmish against Simon, and while in Jerusalem, despite 2 Maccabees describing good initial relations between him and Judas (including Judas's appointment to an official position), he eventually tried to have the latter arrested. Judas, however, was able to flee to the countryside and, after defeating Nicanor and the small contingent under him that was giving chase, he later managed to win a decisive battle at Adasa, where Nicanor was killed (ib. 7:26–50), granting Judas once again control over Jerusalem. At this point, emboldened by his multiple victories over the Seleucids, he sent Eupolemus the son of Johanan and Jason the son of Eleazar as a diplomatic party "to make a league of amity and confederacy with the Romans." In the same year, however, Antiochus V was succeeded by his cousin Demetrius I Soter, whose throne his father had usurped. Demetrius, after getting rid of Antiochus and Lysias, sent the general Bacchides to Israel with a large army in order to install Alcimus in the office of high priest. After Bacchides carried out a massacre in Galilee and Alcimus thus claimed to be in a better position than Judas to protect the Hebrew population, the Hasmonean leader prepared to meet the Seleucid general in battle; the unorthodox route Bacchides took, however (through Mount Beth El), may have surprised Judas's forces, two thirds of which, finding themselves greatly outnumbered in an open-field battle, did not actually fight. In what is known as the Battle of Elasa (Laisa), Judas chose to fight against all odds and aimed to win by charging the right flank, where Bacchides would be located, and decapitating the Seleucid command as he had done with Nicanor's.
After what the sources describe as a battle that lasted 'from morning to evening', the Seleucid cavalry was able to cut off Judas, and it was ultimately the Jewish army that was dispersed after the loss of its leader. The achievement of autonomy Upon Judas's death, the persecuted patriots, under his brother Jonathan, fled beyond the Jordan River. (ib. 9:25–27) They set camp near a morass by the name of Asphar, and remained, after several engagements with the Seleucids, in the swamp in the country east of the Jordan. Following the death of his puppet High Priest Alcimus in 159 BC, Bacchides felt secure enough to leave the country, but two years later, the City of Acre contacted Demetrius and requested the return of Bacchides to deal with the Maccabean threat. Jonathan and Simeon, with ten years' worth of experience in guerrilla warfare, thought it well to retreat farther, and accordingly fortified a place named Beth-hogla in the desert, where they were besieged several days by Bacchides. Jonathan offered the rival general a peace treaty and an exchange of prisoners of war, to which Bacchides readily consented, even taking an oath never again to make war upon Jonathan. Bacchides and his forces then left Israel and nothing is reported for the five following years (158–153 BC), as the chief source (1 Maccabees) reports: "Thus the sword ceased from Israel. Jonathan settled in Michmash and began to judge the people; and he destroyed the godless and the apostate out of Israel". An important external event brought the design of the Maccabeans to fruition. Demetrius I Soter's relations with Attalus II Philadelphus of Pergamon (reigned 159–138 BC), Ptolemy VI of Egypt (reigned 163–145 BC), and Ptolemy's co-ruler Cleopatra II of Egypt were deteriorating, and they supported a rival claimant to the Seleucid throne: Alexander Balas, who purported to be the son of Antiochus IV Epiphanes and a first cousin of Demetrius. Demetrius was forced to recall the garrisons of Judea, except those in the City of Acre and at Beth-zur, to bolster his strength. Furthermore, he made a bid for the loyalty of Jonathan, permitting him to recruit an army and to reclaim the hostages kept in the City of Acre. Jonathan gladly accepted these terms, took up residence at Jerusalem in 153 BC, and began fortifying the city. Alexander Balas offered Jonathan even more favourable terms, including official appointment as High Priest in Jerusalem, and despite a second letter from Demetrius promising prerogatives that were almost impossible to guarantee, Jonathan declared allegiance to Balas. Jonathan became the official religious leader of his people, and officiated at the Feast of Tabernacles of 153 BC wearing the High Priest's garments. The Hellenistic party could no longer attack him without severe consequences. The Hasmoneans held the office of High Priest continuously until 37 BC. Demetrius soon lost both his throne and his life, in 150 BC. The victorious Alexander Balas was given the further honour of marriage to Cleopatra Thea, daughter of his allies Ptolemy VI and Cleopatra II. Jonathan was invited to Ptolemais for the ceremony, appearing with presents for both kings, and was permitted to sit between them as their equal; Balas even clothed him with his own royal garment and otherwise accorded him high honour.
Balas appointed Jonathan as strategos and "meridarch" (i.e., civil governor of a province; details not found in Josephus), sent him back with honours to Jerusalem, and refused to listen to the Hellenistic party's complaints against Jonathan. In 147 BC, Demetrius II Nicator, a son of Demetrius I Soter, claimed Balas' throne. The governor of Coele-Syria, Apollonius Taos, used the opportunity to challenge Jonathan to battle, saying that the Jews might for once leave the mountains and venture out into the plain. Jonathan and Simeon led a force of 10,000 men against Apollonius' forces in Jaffa, which was unprepared for the rapid attack and opened the gates in surrender to the Jewish forces. Apollonius received reinforcements from Azotus and appeared in the plain in charge of 3,000 men including superior cavalry forces. Jonathan assaulted, captured and burned Azotus along with the resident temple of Dagon and the surrounding villages. Alexander Balas honoured the victorious High Priest by giving him the city of Ekron along with its outlying territory. The people of Azotus complained to King Ptolemy VI, who had come to make war upon his son-in-law, but Jonathan met Ptolemy at Jaffa in peace and accompanied him as far as the River Eleutherus. Jonathan then returned to Jerusalem, maintaining peace with the King of Egypt despite their support for different contenders for the Seleucid throne. In 145 BC, the Battle of Antioch resulted in the final defeat of Alexander Balas by the forces of his father-in-law Ptolemy VI. Ptolemy himself, however, was among the casualties of the battle. Demetrius II Nicator remained sole ruler of the Seleucid Empire and became the second husband of Cleopatra Thea. Jonathan owed no allegiance to the new King and took this opportunity to lay siege to the Acra, the Seleucid fortress in Jerusalem and the symbol of Seleucid control over Judea. It was heavily garrisoned by a Seleucid force and offered asylum to Jewish Hellenists. Demetrius was greatly incensed; he appeared with an army at Ptolemais and ordered Jonathan to come before him. Without raising the siege, Jonathan, accompanied by the elders and priests, went to the king and pacified him with presents, so that the king not only confirmed him in his office of high priest, but gave to him the three Samaritan toparchies of Mount Ephraim, Lod, and Ramathaim-Zophim. In consideration of a present of 300 talents the entire country was exempted from taxes, the exemption being confirmed in writing. Jonathan in return lifted the siege of the Acra and left it in Seleucid hands. Soon, however, a new claimant to the Seleucid throne appeared in the person of the young Antiochus VI Dionysus, son of Alexander Balas and Cleopatra Thea. He was three years old at most, but general Diodotus Tryphon used him to advance his own designs on the throne. In the face of this new enemy, Demetrius not only promised to withdraw the garrison from the City of Acre, but also called Jonathan his ally and requested him to send troops. The 3,000 men of Jonathan protected Demetrius in his capital, Antioch, against his own subjects. As Demetrius II did not keep his promise, Jonathan thought it better to support the new king when Diodotus Tryphon and Antiochus VI seized the capital, especially as the latter confirmed all his rights and appointed his brother Simon (Simeon) strategos of the Paralia (the sea coast), from the "Ladder of Tyre" to the frontier of Egypt. 
Jonathan and Simon were now entitled to make conquests; Ashkelon submitted voluntarily while Gaza was forcibly taken. Jonathan vanquished even the strategoi of Demetrius II far to the north, in the plain of Hazar, while Simon at the same time took the strong fortress of Beth-zur on the pretext that it harboured supporters of Demetrius. Like Judas in former years, Jonathan sought alliances with foreign peoples. He renewed the treaty with the Roman Republic and exchanged friendly messages with Sparta and other places. However, the documents referring to those diplomatic events are of questionable authenticity. Diodotus Tryphon went with an army to Judea and invited Jonathan to Scythopolis for a friendly conference, where he persuaded him to dismiss his army of 40,000 men, promising to give him Ptolemais and other fortresses. Jonathan fell into the trap; he took with him to Ptolemais 1,000 men, all of whom were slain; he himself was taken prisoner. When Diodotus Tryphon was about to enter Judea at Hadid, he was confronted by the new Jewish leader, Simon, ready for battle. Tryphon, avoiding an engagement, demanded one hundred talents and Jonathan's two sons as hostages, in return for which he promised to liberate Jonathan. Although Simon did not trust Diodotus Tryphon, he complied with the request so that he might not be accused of the death of his brother. But Diodotus Tryphon did not liberate his prisoner; angry that Simon blocked his way everywhere and that he could accomplish nothing, he executed Jonathan at Baskama, in the country east of the Jordan. Jonathan was buried by Simeon at Modin. Nothing is known of his two captive sons. One of his daughters was an ancestor of Josephus. Simon assumed the leadership (142 BC), receiving the double office of High Priest and Ethnarch (Prince) of Israel. The leadership of the Hasmoneans was established by a resolution, adopted in 141 BC, at a large assembly "of the priests and the people and of the elders of the land, to the effect that Simon should be their leader and High Priest forever, until there should arise a faithful prophet" (1 Macc. 14:41). Ironically, the election was performed in Hellenistic fashion. Simon, having made the Jewish people semi-independent of the Seleucid Greeks, reigned from 142 to 135 BC and formed the Hasmonean dynasty, finally capturing the citadel [Acra] in 141 BC. The Roman Senate accorded the new dynasty recognition c. 139 BC, when the delegation of Simon was in Rome. Simon led the people in peace and prosperity, until, in February 135 BC, he was assassinated at the instigation of his son-in-law Ptolemy, son of Abubus (also spelled Abobus or Abobi), who had been named governor of the region by the Seleucids. Simon's eldest sons, Mattathias and Judah, were also murdered. Hasmonean expansion After achieving semi-independence from the Seleucid Empire, the dynasty began to expand into the neighboring regions. Perea had already been conquered by Jonathan Apphus; subsequently, John Hyrcanus conquered Samaria and Idumea, Aristobulus I conquered the territory of Galilee, and Alexander Jannaeus conquered the territory of Iturea. In addition to territorial conquests, the Hasmonean rulers, initially reigning only as rebel leaders, gradually assumed the religious office of High Priest under Jonathan Apphus in 152 BC and the monarchical title of Ethnarch under Simon Thassi in 142 BC, with Aristobulus I eventually assuming the title of King (basileus) in 104 BC. In c.
135 BC, John Hyrcanus, Simon's third son, assumed the leadership as both the High Priest (Kohen Gadol) and Ethnarch, taking a Greek "regnal name" (see Hyrcania) in an acceptance of the Hellenistic culture of his Seleucid suzerains. Within a year of the death of Simon, Seleucid King Antiochus VII Sidetes attacked Jerusalem. According to Josephus, John Hyrcanus opened King David's sepulchre and removed three thousand talents which he paid as tribute to spare the city. He managed to retain the governorship as a Seleucid vassal, and for the next two decades of his reign Hyrcanus continued, like his father, to rule semi-autonomously from the Seleucids. The Seleucid empire had been disintegrating in the face of the Seleucid–Parthian wars and in 129 BC Antiochus VII Sidetes was killed in Media by the forces of Phraates II of Parthia, permanently ending Seleucid rule east of the Euphrates. In 116 BC, a civil war between Seleucid half-brothers Antiochus VIII Grypus and Antiochus IX Cyzicenus broke out, and it was in this moment of division of the already significantly reduced kingdom that semi-independent Seleucid client states such as Judea found an opportunity to revolt. In 110 BC, John Hyrcanus carried out the first military conquests of the newly independent Hasmonean kingdom, raising a mercenary army to capture Madaba and Shechem, significantly increasing his regional influence. Hyrcanus conquered Transjordan, Samaria, and Idumea (also known as Edom), and forced Idumeans to convert to Judaism: Hyrcanus ... subdued all the Idumeans; and permitted them to stay in that country, if they would circumcise their genitals, and make use of the laws of the Jews; and they were so desirous of living in the country of their forefathers, that they submitted to the use of circumcision, (25) and of the rest of the Jewish ways of living; at which time therefore this befell them, that they were hereafter no other than Jews. Hyrcanus desired that his wife succeed him as head of the government, but upon his death in 104 BC, the eldest of his five sons, Aristobulus I, whom he had wished to provide only with the title of High Priest, jailed his three brothers (including Alexander Jannaeus) and his mother, starving her to death. By those means he came into possession of the throne and became the first Hasmonean to take the title of Basileus, asserting the new-found independence of the state. Subsequently he conquered Galilee. Aristobulus I died after a painful illness in 103 BC. Aristobulus' brothers were freed from prison by his widow; one of them, Alexander Jannaeus, reigned as a king as well as a high priest from 103 to 76 BC. During his reign he conquered Iturea and, according to Josephus, forcibly converted Itureans to Judaism. In 93 BC at the Battle of Gadara, Jannaeus and his forces were ambushed in a hilly area by the Nabataeans, who saw the Hasmoneans' Transjordanian acquisitions as a threat to their interests, and Jannaeus was "lucky to escape alive". After this defeat, Jannaeus returned to fierce Jewish opposition in Jerusalem, and had to cede the Transjordan territories to the Nabataeans just so he could dissuade them from supporting his opponents in Judea; according to Josephus, in c. 87 BC, six years into the civil war (which involved even the Seleucid king Demetrius III Eucaerus), he crucified 800 Jewish rebels in Jerusalem. He died during the siege of the fortress Ragaba and was succeeded by his wife, Salome Alexandra, who reigned from 76 to 67 BC.
She was the only regnant Jewish queen in the Second Temple period, following the usurper Queen Athalia, who had reigned centuries earlier. During Alexandra's reign, her son Hyrcanus II held the office of High Priest and was named her successor. Civil war Pharisees and Sadducees were rival sects of Judaism; all through the Hasmonean period, they functioned primarily as political factions. One of the factors that distinguished the Pharisees (who are first mentioned by Josephus in connection with Jonathan ("Ant." xiii. 5, § 9)) from other groups prior to the destruction of the Temple was their belief that all Jews had to observe the purity laws (which applied to the Temple service) outside the Temple. The major difference, however, was the continued adherence of the Pharisees to the laws and traditions of the Jewish people in the face of assimilation. As Josephus noted, the Pharisees were considered the most expert and accurate expositors of Jewish law. Later texts such as the Mishnah and the Talmud record a host of rulings ascribed to the Pharisees concerning sacrifices and other ritual practices in the Temple, torts, criminal law, and governance. The influence of the Pharisees over the lives of the common people remained strong, and their rulings on Jewish law were deemed authoritative by many. Although these texts were written long after these periods, many scholars believe that they are a fairly reliable account of history during the Second Temple period. Although the Pharisees had opposed the wars of expansion of the Hasmoneans and the forced conversions of the Idumeans, the political rift between them became wider when Pharisees demanded that the Hasmonean king Alexander Jannaeus choose between being king and being High Priest. In response, the king openly sided with the Sadducees by adopting their rites in the Temple. His actions caused a riot in the Temple and led to a brief civil war that ended with a bloody repression of the Pharisees, although on his deathbed the king called for a reconciliation between the two parties. However, Alexander was succeeded by his widow, Salome Alexandra, whom Josephus attests to have been very favourably inclined toward the Pharisees; her brother Shimon ben Shetach was himself a leading Pharisee, and their political influence, especially in the institution known as the Sanhedrin, increased tremendously under her reign. Upon her death, her elder son, Hyrcanus II, sought Pharisee support, and her younger son, Aristobulus II, sought the support of the Sadducees; Hyrcanus had scarcely reigned three months when his younger brother, Aristobulus, rose in rebellion. The conflict between them only ended when the Roman general Pompey captured Jerusalem in 63 BC and inaugurated the Roman period of Jewish history. According to Josephus: "Now Hyrcanus was heir to the kingdom, and to him did his mother commit it before she died; but Aristobulus was superior to him in power and magnanimity; and when there was a battle between them, to decide the dispute about the kingdom, near Jericho, the greatest part deserted Hyrcanus, and went over to Aristobulus." Hyrcanus then took refuge in the citadel of Jerusalem, but the eventual capture of the Temple by Aristobulus II compelled him to surrender. A peace was concluded, according to the terms of which Hyrcanus was to renounce the throne and the office of high priest (comp. Emil Schürer, "Gesch." i.
291, note 2), but was to retain the revenues of his previous role, as Josephus states: "but Hyrcanus, with those of his party who stayed with him, fled to Antonia, and got into his power the hostages (which were Aristobulus's wife, with her children) that he might persevere; but the parties came to an agreement before things should come to extremes, that Aristobulus should be king, and Hyrcanus should resign, but retain all the rest of his dignities, as being the king's brother. Hereupon they were reconciled to each other in the Temple, and embraced one another in a very kind manner, while the people stood round about them; they also changed their houses, while Aristobulus went to the royal palace, and Hyrcanus retired to the house of Aristobulus." Aristobulus then ruled from 67–63 BC. From 63 to 40 BC, the official government (by this time reduced to a protectorate of Rome as described below) was back in the hands of Hyrcanus II as High Priest and Ethnarch, although effective power was in the hands of his adviser Antipater the Idumaean. While Hyrcanus had retired to private life, Antipater the Idumean, governor of Idumea, began to impress upon his mind that Aristobulus was planning his death, finally persuading him to take refuge with Aretas, king of the Nabatæans. Aretas, bribed by Antipater, who also promised him the restitution of the Arabian towns taken by the Hasmoneans, readily espoused the cause of Hyrcanus and advanced toward Jerusalem with an army of fifty thousand. During the siege, which lasted several months, the adherents of Hyrcanus were guilty of two acts that greatly incensed the majority of the Jews: they stoned the pious Onias (see Honi ha-Magel) and when the besieged paid the besiegers to receive sacrificial lambs for the purpose of the paschal sacrifice, they instead sent a pig.[note 2] Roman intervention: the end of the Hasmonean dynasty While this civil war was going on, the Roman general Marcus Aemilius Scaurus went to Syria to take possession of the kingdom of the Seleucids, in the name of Gnaeus Pompeius Magnus. Each of the brothers appealed to him through gifts and promises: Scaurus, moved by a gift of four hundred talents, decided in favour of Aristobulus; Aretas was ordered to withdraw his army from Judea and while retreating suffered a crushing defeat at the hands of Aristobulus himself. But the situation changed when Pompey, who had just been awarded the title "Conqueror of Asia" due to his decisive victories in Asia Minor over Pontus and the Seleucid Empire, came to Syria (63 BC) having decided to bring Judea under the rule of the Romans. The two brothers, as well as a third party which, weary of Hasmonean quarrels, desired the extinction of the dynasty, sent delegates to Pompey; who delayed the decision and eventually, in spite of Aristobulus' gift of a golden vine valued at five hundred talents, decided that Hyrcanus II would have made a more acceptable ward of Rome than his brother. Aristobulus fathomed the designs of Pompey and assembled his armies; but Pompey was able to defeat him multiple times and capture his cities, so he entrenched himself in the fortress of Alexandrium. Soon realising the futility of resistance however, he surrendered at the first summons of the Romans, and decided to deliver Jerusalem to them. Despite this, the patriots were not willing to open their gates to the Romans, and a siege ensued which ended in the capture of the city. 
Pompey entered the Holy of Holies (this was only the second time that someone had dared to penetrate into this sacred spot). Judaea had to pay tribute to Rome and was placed under the supervision of the Roman governor of Syria. Aristobulus was taken to Rome as a prisoner, and Hyrcanus was restored to his position as High Priest but not to the Kingship. Political authority rested with the Romans, whose interests were represented by Antipater. This effectively ended Hasmonean rule of the area and Jewish independence. In 57–55 BC, Aulus Gabinius, proconsul of Syria, split the former Hasmonean Kingdom into Galilee, Samaria, and Judea, with five districts of legal and religious councils known as sanhedrin (Greek: συνέδριον, "synedrion"): "And when he had ordained five councils (συνέδρια), he distributed the nation into the same number of parts. So these councils governed the people; the first was at Jerusalem, the second at Gadara, the third at Amathus, the fourth at Jericho, and the fifth at Sepphoris in Galilee." When, in 50 BC, it appeared that Julius Caesar was interested in using Aristobulus and his family as his clients to take control of Judea from Hyrcanus II and Antipater, who were in turn clients of Pompey, the supporters of the latter had Aristobulus poisoned in Rome and executed Alexander in Antioch. However, Hyrcanus and Antipater would soon turn to the other side: "At the beginning of the civil war between [Caesar] and Pompey, Hyrcanus, at the instance of Antipater, prepared to support the man to whom he owed his position; but after Pompey was murdered in Egypt, Antipater led the Jewish forces to the help of Caesar, who was besieged at Alexandria. His timely help and his influence over the Egyptian Jews won the favour of Caesar, and secured him an extension of his authority in Palestine, while Hyrcanus was confirmed in the title of ethnarch. Joppa was restored to the Hasmonean domain, Judea was granted freedom from all tribute and taxes to Rome, and the independence of the internal administration was guaranteed." Antipater and Hyrcanus's newly won favour led the triumphant Caesar to ignore the claims of Aristobulus's younger son, Antigonus the Hasmonean, and to confirm them in their authority, despite their previous allegiance to Pompey. Josephus noted, Antigonus... came to Caesar... and accused Hyrcanus and Antipater, how they had driven him and his brethren entirely out of their native country... and that as to the assistance they had sent [to Caesar] into Egypt, it was not done out of good-will to him, but out of the fear they were in from former quarrels, and in order to gain pardon for their friendship to [his enemy] Pompey. Hyrcanus II's restoration as ethnarch in 47 BC coincided with Caesar's appointment of Antipater as the first Procurator of Judea (Roman province): "Caesar appointed Hyrcanus to be high priest, and gave Antipater what principality he himself should choose, leaving the determination to himself; so he made him procurator of Judea." Antipater appointed his sons to positions of influence: Phasael became Governor of Jerusalem, and Herod Governor of Galilee. This led to increasing tension between Hyrcanus and the family of Antipater, culminating in a trial of Herod for supposed abuses in his governorship, which resulted in Herod's flight into exile in 46 BC. Herod soon returned, however, and the honours to Antipater's family continued.
Hyrcanus' incapacity and weakness were so manifest that, when he defended Herod against the Sanhedrin and before Mark Antony, the latter stripped Hyrcanus of his nominal political authority and his title, bestowing them both upon the accused. Caesar was assassinated in 44 BC, spreading unrest and confusion throughout the Roman world, including Judaea. Shortly thereafter, Antipater the Idumean was assassinated in 43 BC by the Nabatean king, Malichus I, who had bribed one of Hyrcanus' cup-bearers to poison him. However, Antipater's sons managed to maintain their control over Hyrcanus and Judea. In 40 BC a Parthian army crossed the Euphrates, joined by Quintus Labienus, a Roman republican general, who was once sent as ambassador to the Parthians, and who now, following the events of the Liberators' civil war, assisted them in their invasion of Roman territories, and was able to entice Mark Antony's Roman garrisons around Syria to rally to his cause. The Parthians split their army, and under Pacorus conquered the Levant: Antigonus... roused the Parthians to invade Syria and Palestine, [and] the Jews eagerly rose in support of the scion of the Maccabean house, and drove out the hated Idumeans with their puppet Jewish king. The struggle between the people and the Romans had begun in earnest, and though Antigonus, when placed on the throne by the Parthians, proceeded to spoil and harry the Jews, they, rejoicing at the restoration of the Hasmonean line, thought a new era of independence had come. When Antipater's son Phasael and Hyrcanus II set out on an embassy to the Parthians and were captured, Antigonus, who was present, cut off Hyrcanus's ears to make him unsuitable for the High Priesthood, while Phasael, in fear of humiliation and torture, killed himself. Antigonus, whose Hebrew name was Mattathias, bore the double title of king and High Priest for only three years, as he had not disposed of Antipater's other son Herod, the most dangerous of his enemies. Herod fled into exile and sought the support of Mark Antony. He was designated "King of the Jews" by the Roman Senate in 40 BC, as Antony then resolved to get [Herod] made king of the Jews...[and] told [the Senate] that it was for their advantage in the Parthian war that Herod should be king; so they all gave their votes for it. And when the senate was separated, Antony and Caesar [Augustus] went out, with Herod between them; while the consul and the rest of the magistrates went before them, in order to offer sacrifices [to the Roman gods], and to lay the decree in the Capitol. Antony also made a feast for Herod on the first day of his reign. The struggle thereafter lasted for some years, as the main Roman forces were occupied with defeating the Parthians and had few additional resources to use to support Herod. After the Parthians' defeat, however, Herod was victorious over his rival in 37 BC; Antigonus was delivered to Antony and executed, and the Romans assented to Herod's proclamation as King of the Jews, bringing about the end of Hasmonean rule over Judea. Antigonus was not the last Hasmonean; however, the fate of the remaining male members of the family under Herod was not a happy one. Aristobulus III, grandson of Aristobulus II through his elder son Alexander, was briefly made high priest, but was soon executed (36 BC) due to Herod's jealousy. His sister Mariamne was married to Herod, but also fell victim to his jealousy. Her sons by Herod, Aristobulus IV and Alexander, were in their adulthood also executed by their father.
Hyrcanus II had been held by the Parthians since 40 BC. For four years he lived amid the Babylonian Jews, who paid him every mark of respect. However, in 36 BC Herod, who feared that the last remaining male Hasmonean might gain the support of the Parthians to retake the throne, invited him to return to Jerusalem. The Babylonian Jews warned him in vain as Herod received him with every mark of respect, assigning him the first place at his table and the presidency of the state council, while awaiting an opportunity to get rid of him. As a Hasmonean, Hyrcanus was too dangerous a rival for Herod. In the year 30 BC, charged with plotting with the King of Arabia, Hyrcanus was condemned and executed. The later Herodian rulers Agrippa I and Agrippa II both had Hasmonean blood, as Agrippa I's father was Aristobulus IV, son of Herod by Mariamne I, but they were not direct male descendants. The Hasmoneans did not have defined rules for succession and Agrippa was viewed as legitimate via his grandmother, Mariamne I. Foreign views In his Histories, Tacitus explained the background for the establishment of the Hasmonean state: While the East was under the dominion of the Assyrians, Medes, and Persians, the Jews were regarded as the meanest of their subjects: but after the Macedonians gained supremacy, King Antiochus endeavored to abolish Jewish superstition and to introduce Greek civilization; the war with the Parthians, however, prevented his improving this basest of peoples; for it was exactly at that time that Arsaces had revolted. Later on, since the power of Macedon had waned, the Parthians were not yet come to their strength, and the Romans were far away, the Jews selected their own kings. These in turn were expelled by the fickle mob; but recovering their throne by force of arms, they banished citizens, destroyed towns, killed brothers, wives, and parents, and dared essay every other kind of royal crime without hesitation; but they fostered the national superstition, for they had assumed the priesthood to support their civil authority. See also Notes References Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dog] | [TOKENS: 9934] |
Contents Dog The dog (Canis familiaris or Canis lupus familiaris) is a domesticated descendant of wolves. Also called the domestic dog, it was selectively bred during the Late Pleistocene by hunter-gatherers. Dogs and the modern gray wolf share a common ancestor. Dogs were the first species to be domesticated over 14,000 years ago, before the development of agriculture, though genetic studies suggest the domestication process may have begun over 25,000 years ago. Due to their long association with humans, dogs have gained the ability to thrive on a starch-rich diet that would be inadequate for other canids. Dogs have been bred for desired behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They have the same number of bones (with the exception of the tail), powerful jaws that house around 42 teeth, and well-developed senses of smell, hearing, and sight. Compared to humans, dogs possess a superior sense of smell and hearing, but inferior visual acuity. Dogs perform many roles for humans, such as hunting, herding, pulling loads, protection, companionship, therapy, aiding disabled people, and assisting police and the military. Communication in dogs includes eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and chemical communication (scents, pheromones, and taste). They mark their territories by urinating on them, which is more likely when entering a new environment. Over the millennia, dogs have uniquely adapted to human behavior; this adaptation includes being able to understand and communicate with humans. As such, the human–canine bond has been a topic of frequent study, and dogs' influence on human society has given them the sobriquet of "man's best friend". The global dog population is estimated at 700 million to 1 billion, distributed around the world. The dog is the most popular pet in the United States, present in 34–40% of households. Developed countries make up approximately 20% of the global dog population, while around 75% of dogs are estimated to be from developing countries, mainly in the form of feral and street dogs. Taxonomy Gray wolf (domestic dog) Coyote African wolf Ethiopian wolf Golden jackal Dhole African wild dog Side-striped jackal Black-backed jackal Dogs are domesticated members of the family Canidae. They are classified as a subspecies of Canis lupus, along with wolves and dingoes. Genetic studies show that dogs likely diverged from wolves between 27,000 and 40,000 years ago. Dogs were domesticated from wolves at least 14,000 years ago by hunter-gatherers, before the development of agriculture; the remains of the Bonn–Oberkassel dog, buried alongside humans between 14,000 and 15,000 years ago, are the earliest to be conclusively identified as a domesticated dog. The dingo and the related New Guinea singing dog resulted from the geographic isolation and feralization of dogs in Oceania over 8,000 years ago. Dogs, wolves, and dingoes have sometimes been classified as separate species. In 1758, the Swedish botanist and zoologist Carl Linnaeus assigned the genus name Canis (which is the Latin word for "dog") to the domestic dog, the wolf, and the golden jackal in his book, Systema Naturae. He classified the domestic dog as Canis familiaris and, on the next page, classified the grey wolf as Canis lupus. 
Linnaeus considered the dog to be a separate species from the wolf because of its upturning tail (cauda recurvata in Latin), which is not found in any other canid. In the 2005 edition of Mammal Species of the World, mammalogist W. Christopher Wozencraft listed the wild subspecies of the wolf under Canis lupus and proposed two additional subspecies: familiaris, as named by Linnaeus in 1758, and dingo, named by Meyer in 1793. Wozencraft included hallstromi (the New Guinea singing dog) as another name (junior synonym) for the dingo. This classification was informed by a 1999 mitochondrial DNA study. The classification of dingoes is disputed and a political issue in Australia. Classifying dingoes as wild dogs makes it simpler to reduce or control dingo populations that threaten livestock. Treating dingoes as a separate species allows conservation programs to protect the dingo population. Dingo classification affects wildlife management policies, legislation, and societal attitudes. In 2019, a workshop hosted by the IUCN/Species Survival Commission's Canid Specialist Group considered the dingo and the New Guinea singing dog to be feral Canis familiaris. Therefore, it did not assess them for the IUCN Red List of threatened species. The earliest remains generally accepted to be those of a domesticated dog were discovered in Bonn-Oberkassel, Germany. Contextual, isotopic, genetic, and morphological evidence shows that this dog was not a local wolf. Dated to 14,223 years ago, the dog was found buried along with a man and a woman; all three had been sprayed with red hematite powder and buried under large, thick basalt blocks. The dog had survived canine distemper as a puppy; survival without intensive human care would have been unlikely, and the illness occurred at an age before the dog could have had any utilitarian use. This timing indicates that the dog was the first species to be domesticated during the hunter-gatherer era, predating agriculture. Earlier remains dating back 30,000 years have been described as Paleolithic dogs, but their status as dogs or wolves remains debated because considerable morphological diversity existed among wolves during the Late Pleistocene. DNA sequences show that all ancient and modern dogs share a common ancestry and descended from an ancient, extinct wolf population that was distinct from any modern wolf lineage. Some studies have posited that all living wolves are more closely related to each other than to dogs, while others have suggested that dogs are more closely related to modern Eurasian wolves than to American wolves. The dog is a domestic animal that likely travelled a commensal pathway into domestication (i.e. humans initially neither benefitted nor were harmed by wild dogs eating refuse from their camps). The questions of when and where dogs were first domesticated remain uncertain. Genetic studies suggest a domestication process commencing over 25,000 years ago, in one or several wolf populations in either Europe, the high Arctic, or eastern Asia. In 2021, a literature review of the current evidence inferred that the dog was domesticated in Siberia 23,000 years ago by ancient North Siberians, then later dispersed eastward into the Americas and westward across Eurasia, with dogs likely accompanying the first humans to inhabit the Americas. Some studies have suggested that the extinct Japanese wolf is closely related to the ancestor of domestic dogs. In 2018, a study identified 429 genes that differed between modern dogs and modern wolves.
As the differences in these genes could also be found in ancient dog fossils, these were regarded as being the result of the initial domestication and not from recent breed formation. These genes are linked to neural crest and central nervous system development. These genes affect embryogenesis and can confer tameness, smaller jaws, floppy ears, and diminished craniofacial development, which distinguish domesticated dogs from wolves and are considered to reflect domestication syndrome. The study concluded that during early dog domestication, the initial selection was for behavior. This trait is influenced by those genes which act in the neural crest, which led to the phenotypes observed in modern dogs. There are around 450 official dog breeds, the most of any mammal. Dogs began diversifying in the Victorian era, when humans took control of their natural selection. Most breeds were derived from small numbers of founders within the last 200 years. Since then, dogs have undergone rapid phenotypic change and have been subjected to artificial selection by humans. The skull, body, and limb proportions between breeds display more phenotypic diversity than can be found within the entire order of carnivores. These breeds possess distinct traits related to morphology, which include body size, skull shape, tail phenotype, fur type, and colour. As such, humans have long used dogs for their desirable traits to complete or fulfill a certain work or role. Their behavioural traits include guarding, herding, hunting, retrieving, and scent detection. Their personality traits include hypersocial behavior, boldness, and aggression. Present-day dogs are dispersed around the world. An example of this dispersal is the numerous modern breeds of European lineage during the Victorian era. Anatomy and physiology Dogs are extremely variable in size, ranging from one of the largest breeds, the Great Dane, at 50–79 kg (110–174 lb) and 71–81 cm (28–32 in), to one of the smallest, the Chihuahua, at 0.5–3 kg (1.1–6.6 lb) and 13–20 cm (5.1–7.9 in). All healthy dogs, regardless of their size and type, have the same number of bones (with the exception of the tail), although there is significant skeletal variation between dogs of different types. The dog's skeleton is well adapted for running; the vertebrae on the neck and back have extensions for back muscles, consisting of epaxial muscles and hypaxial muscles, to connect to; the long ribs provide room for the heart and lungs; and the shoulders are unattached to the skeleton, allowing for flexibility. Compared to the dog's wolf-like ancestors, selective breeding since domestication has seen the dog's skeleton increase in size for larger types such as mastiffs and miniaturised for smaller types such as terriers; dwarfism has been selectively bred for some types where short legs are preferred, such as dachshunds and corgis. Most dogs naturally have 26 vertebrae in their tails, but some with naturally short tails have as few as three. The dog's skull has identical components regardless of breed type, but there is significant divergence in terms of skull shape between types. The three basic skull shapes are the elongated dolichocephalic type as seen in sighthounds, the intermediate mesocephalic or mesaticephalic type, and the very short and broad brachycephalic type exemplified by mastiff type skulls. The jaw contains around 42 teeth, and it has evolved for the consumption of flesh. Dogs use their carnassial teeth to cut food into bite-sized chunks, more especially meat. 
Dogs' senses include vision, hearing, smell, taste, touch, and magnetoreception. One study suggests that dogs can feel small variations in Earth's magnetic field. Dogs prefer to defecate with their spines aligned in a north–south position in calm magnetic field conditions. Dogs' vision is dichromatic; their visual world consists of yellows, blues, and grays. They have difficulty differentiating between red and green, and much like other mammals, the dog's eye is composed of two types of cone cells compared to the human's three. The divergence of the eye axis of dogs ranges from 12 to 25°, depending on the breed, which can have different retina configurations. The fovea centralis area of the eye is attached to a nerve fiber, and is the most sensitive to photons. Additionally, a study found that dogs' visual acuity was up to eight times less effective than a human's, and their ability to discriminate levels of brightness was about two times worse than a human's. While the human brain is dominated by a large visual cortex, the dog brain is dominated by a large olfactory cortex. Dogs have roughly forty times more smell-sensitive receptors than humans, ranging from about 125 million to nearly 300 million in some dog breeds, such as bloodhounds. This sense of smell is the most prominent sense of the species; it detects chemical changes in the environment, allowing dogs to pinpoint the location of mating partners, potential stressors, resources, etc. Dogs also have an acute sense of hearing up to four times greater than that of humans. They can pick up the slightest sounds from about 400 m (1,300 ft) compared to 90 m (300 ft) for humans. Dogs have stiff, deeply embedded hairs known as whiskers that sense atmospheric changes, vibrations, and objects not visible in low light conditions. The lowermost part of the whiskers holds more receptor cells than other hair types, which helps alert dogs to objects that could collide with the nose, ears, and jaw. Whiskers likely also facilitate the movement of food towards the mouth. The coats of domestic dogs are of two varieties: "double" being common in dogs (as well as wolves) originating from colder climates, made up of a coarse guard hair and a soft down hair, or "single", with the topcoat only. Breeds may have an occasional "blaze", stripe, or "star" of white fur on their chest or underside. Premature graying can occur in dogs as early as one year of age; this is associated with impulsive behaviors, anxiety behaviors, and fear of unfamiliar noise, people, or animals. Some dog breeds are hairless, while others have a very thick corded coat. The coats of certain breeds are often groomed to a characteristic style, for example, the Yorkshire Terrier's "show cut". A dog's dewclaw is the fifth digit on its forelimbs and hind legs. Dewclaws on the forelimbs are attached by bone and ligament, while the dewclaws on the hind legs are attached only by skin. Most dogs are not born with dewclaws in their hind legs, and some are without them in their forelimbs. Dogs' dewclaws consist of the proximal phalanges and distal phalanges. Some publications theorize that dewclaws in wolves, which usually do not have them, were a sign of hybridization with dogs. A dog's tail is the terminal appendage of the vertebral column, which is made up of a string of 5 to 23 vertebrae enclosed in muscles and skin that support the dog's back extensor muscles. One of the primary functions of a dog's tail is to communicate its emotional state.
The tail also helps the dog maintain balance by putting its weight on the opposite side of the dog's tilt, and it can also help the dog spread its anal gland's scent through the tail's position and movement. Dogs can have a violet gland (or supracaudal gland) characterized by sebaceous glands on the dorsal surface of their tails; in some breeds, it may be vestigial or absent. The enlargement of the violet gland in the tail, which can create a bald spot from hair loss, can be caused by Cushing's disease or an excess of sebum from androgens in the sebaceous glands. A study suggests that dogs show asymmetric tail-wagging responses to different emotive stimuli. "Stimuli that could be expected to elicit approach tendencies seem to be associated with [a] higher amplitude of tail-wagging movements to the right side". Dogs can injure themselves by wagging their tails forcefully; this condition is called kennel tail, happy tail, bleeding tail, or splitting tail. In some hunting dogs, the tail is traditionally docked to avoid injuries. Some dogs can be born without tails because of a DNA variant in the T gene, which can also result in a congenitally short (bobtail) tail. Tail docking is opposed by many veterinary and animal welfare organisations such as the American Veterinary Medical Association and the British Veterinary Association. Evidence from veterinary practices and questionnaires showed that around 500 dogs would need to have their tail docked to prevent one injury. Health Numerous disorders have been known to affect dogs. Some are congenital and others are acquired. Dogs can acquire upper respiratory tract diseases including diseases that affect the nasal cavity, the larynx, and the trachea; lower respiratory tract diseases which includes pulmonary disease and acute respiratory diseases; heart diseases which includes any cardiovascular inflammation or dysfunction of the heart; haemopoietic diseases including anaemia and clotting disorders; gastrointestinal disease such as diarrhoea and gastric dilatation volvulus; hepatic disease such as portosystemic shunts and liver failure; pancreatic disease such as pancreatitis; renal disease; lower urinary tract disease such as cystitis and urolithiasis; endocrine disorders such as diabetes mellitus, Cushing's syndrome, hypoadrenocorticism, and hypothyroidism; nervous system diseases such as seizures and spinal injury; musculoskeletal disease such as arthritis and myopathies; dermatological disorders such as alopecia and pyoderma; ophthalmological diseases such as conjunctivitis, glaucoma, entropion, and progressive retinal atrophy; and neoplasia. Common dog parasites are lice, fleas, fly larvae, ticks, mites, cestodes, nematodes, and coccidia. Taenia is a notable genus with 5 species in which dogs are the definitive host. Additionally, dogs are a source of zoonoses for humans. They are responsible for 99% of rabies cases worldwide; however, in some developed countries such as the UK, rabies is absent from dogs and is instead only transmitted by bats. Other common zoonoses are hydatid disease, leptospirosis, pasteurellosis, ringworm, and toxocariasis. Common infections in dogs include canine adenovirus, canine distemper virus, canine parvovirus, leptospirosis, canine influenza, and canine coronavirus. All of these conditions have vaccines available. Dogs are the companion animal most frequently reported for exposure to toxins. 
Most poisonings are accidental; in the US more than 80% of reports of exposure to the ASPCA animal poisoning hotline are due to oral exposure. The most common substances people report exposure to are pharmaceuticals, toxic foods, and rodenticides. Data from the Pet Poison Helpline shows that human drugs are the most frequent cause of toxicosis death. The most common household products ingested are cleaning products. Most food-related poisonings involved theobromine poisoning (chocolate). Other common food poisonings include xylitol, Vitis (grapes, raisins, etc.), and Allium (garlic, onions, etc.). Pyrethrin insecticides were the most common cause of pesticide poisoning. Metaldehyde, a common pesticide for snails and slugs, typically causes severe outcomes when ingested by dogs. Neoplasia is the most common cause of death for dogs. Other common causes of death are heart and renal failure. Their pathology is similar to that of humans, as is their response to treatment and their outcomes. Genes found to be responsible for disorders in humans are investigated as the cause of the corresponding disorders in dogs, and vice versa. The typical lifespan of dogs varies widely among breeds, but the median longevity (the age at which half the dogs in a population have died and half are still alive) is about 12.7 years. Obesity correlates negatively with longevity, with one study finding obese dogs to have a life expectancy approximately a year and a half shorter than that of dogs with a healthy weight. A 2024 UK study analyzing 584,734 dogs concluded that purebred dogs live longer than crossbred dogs, challenging the previous notion that the latter have higher life expectancies. The authors noted that their study included "designer dogs" as crossbred and that purebred dogs were typically given better care than their crossbred counterparts, which likely influenced the outcome of the study. Other studies also show that fully mongrel dogs live about a year longer on average than dogs with pedigrees. Furthermore, small dogs with longer muzzles have been shown to have longer lifespans than larger medium-sized dogs with much flatter muzzles. For free-ranging dogs, fewer than 1 in 5 reach sexual maturity, and the median life expectancy of feral dogs is less than half that of dogs living with humans. In domestic dogs, sexual maturity occurs at around six months to one year of age for both males and females, although this can be delayed until up to two years of age in some large breeds. This is the time at which female dogs will have their first estrous cycle, characterized by their vulvas swelling and producing discharges, usually lasting between 4 and 20 days. They will experience subsequent estrous cycles semiannually, during which the body prepares for pregnancy. At the peak of the cycle, females will become estrous, mentally and physically receptive to copulation. Because the ova survive and can be fertilized for a week after ovulation, more than one male can sire the same litter. Fertilization typically occurs two to five days after ovulation. After ejaculation, the dogs are coitally tied for around 5–30 minutes because of the male's bulbus glandis swelling and the female's constrictor vestibuli contracting; the male will continue ejaculating until they untie naturally due to muscle relaxation. Fourteen to sixteen days after ovulation, the embryo attaches to the uterus, and after seven to eight more days, a heartbeat is detectable.
Dogs bear their litters roughly 58 to 68 days after fertilization, with an average of 63 days, although the length of gestation can vary. An average litter consists of about six puppies. Neutering is the sterilization of animals via gonadectomy: an orchidectomy (castration) in males and an ovariohysterectomy (spay) in bitches. Neutering reduces problems caused by hypersexuality, especially in male dogs. Spayed females are less likely to develop cancers affecting the mammary glands, ovaries, and other reproductive organs. However, neutering increases the risk of urinary incontinence and pyometra in bitches, prostate cancer in males, and osteosarcoma, hemangiosarcoma, cruciate ligament rupture, obesity, and diabetes mellitus in either sex. Neutering is the most common surgical procedure in dogs less than a year old in the US and is seen as a control method for overpopulation. In US shelters, neutering often occurs as early as 6–14 weeks of age. The American Society for the Prevention of Cruelty to Animals (ASPCA) advises that dogs not intended for further breeding should be neutered so that they do not have undesired puppies that may later be euthanized. However, the Society for Theriogenology and the American College of Theriogenologists issued a joint statement opposing mandatory neutering; they said that the cause of overpopulation in the US is cultural. Neutering is less common in most European countries, especially the Nordic countries, although it is common in the UK. In Norway, neutering is illegal unless it is for the benefit of the animal's health (e.g., ovariohysterectomy in the case of ovarian or uterine neoplasia). Some European countries have laws similar to Norway's, but their wording either explicitly allows neutering to control reproduction, or neutering is permitted in practice or through contradictions with other laws. Italy and Portugal have recently passed laws that promote it. Germany forbids early-age neutering, but neutering is still allowed at the usual age. In Romania, neutering is mandatory except when a pedigree for selected breeds can be shown. A common breeding practice for pet dogs is mating between close relatives (e.g., between half- and full-siblings). In a study of seven dog breeds (the Bernese Mountain Dog, Basset Hound, Cairn Terrier, Brittany, German Shepherd Dog, Leonberger, and West Highland White Terrier), it was found that inbreeding decreases litter size and survival. Another analysis of data on 42,855 Dachshund litters found that as the inbreeding coefficient increased, litter size decreased and the percentage of stillborn puppies increased, indicating inbreeding depression. In a study of Boxer litters, 22% of puppies died before reaching 7 weeks of age. Stillbirth was the most frequent cause of death, followed by infection. Mortality due to infection increased significantly with increases in inbreeding. Behavior Dog behavior has been shaped by millennia of contact with humans. Dogs have acquired the ability to understand and communicate with humans and are uniquely attuned to human behaviors. Behavioral scientists suggest that a set of social-cognitive abilities in domestic dogs that are not possessed by the dog's canine relatives or other highly intelligent mammals, such as great apes, parallels children's social-cognitive skills. Dogs have about twice as many neurons in their cerebral cortices as cats, which suggests they could be about twice as intelligent. 
Most domestic animals were initially bred for the production of goods. Dogs, on the other hand, were selectively bred for desirable behavioral traits. In 2016, a study found that only 11 fixed genes showed variation between wolves and dogs. These gene variations indicate the occurrence of artificial selection and the subsequent divergence of behavior and anatomical features. These genes have been shown to affect the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight response (i.e., selection for tameness) and emotional processing. Compared to their wolf counterparts, dogs tend to be less timid and less aggressive, though some of these genes have been associated with aggression in certain dog breeds. Traits of high sociability and lack of fear in dogs may involve genetic modifications related to Williams-Beuren syndrome in humans, which cause hypersociability at the expense of problem-solving ability. In a 2023 study of 58 dogs, some dogs classified as attention deficit hyperactivity disorder-like showed lower serotonin and dopamine concentrations. A similar study claims that hyperactivity is more common in male and young dogs. A dog can become aggressive because of trauma or abuse, fear or anxiety, territorial protection, or protecting an item it considers valuable. Acute stress reactions from post-traumatic stress disorder (PTSD) seen in dogs can evolve into chronic stress. Police dogs with PTSD often refuse to work. Dogs have a natural instinct called prey drive (a term chiefly used in the context of dog training), which can be influenced by breeding. These instincts can drive dogs to treat objects or other animals as prey or to display possessive behavior. These traits have been enhanced in some breeds so that they may be used to hunt and kill vermin or other pests. Puppies and adult dogs sometimes bury food underground. One study found that wolves outperformed dogs in finding food caches, likely due to a "difference in motivation" between wolves and dogs. Some puppies and dogs engage in coprophagy out of habit, stress, a desire for attention, or boredom; most of them will not do it later in life. A study hypothesizes that the behavior was inherited from wolves, a behavior that likely evolved to lessen the presence of intestinal parasites in dens. Most dogs can swim. In a study of 412 dogs, around 36.5% of the dogs could not swim; the other 63.5% were able to swim without a trainer in a swimming pool. A study of 55 dogs found a correlation between swimming and 'improvement' in the hip osteoarthritis joint. The female dog (bitch) may produce colostrum, a type of milk high in nutrients and antibodies, 1–7 days before giving birth. Milk production lasts for around three months and increases with litter size. The mother may sometimes vomit and refuse food during labor contractions. In the later stages of pregnancy, nesting behaviour may occur. Puppies are born with a protective fetal membrane that the mother usually removes shortly after birth. Mothers can have the maternal instinct to groom their puppies, consume their puppies' feces, and protect their puppies, likely due to their hormonal state. While male parents can show relatively indifferent behaviour toward their own puppies, most will play with the young pups as they would with other dogs or humans. A bitch may abandon or attack her puppies or her male partner dog if she is stressed or in pain. 
Researchers have tested dogs' ability to perceive information, retain it as knowledge, and apply it to solve problems. Studies of two dogs suggest that dogs can learn by inference. A study with Rico, a Border Collie, showed that he knew the labels of over 200 different items. He inferred the names of novel things by exclusion learning and correctly retrieved those new items four weeks after the initial exposure. A study of another Border Collie, Chaser, documented that Chaser had learned the names of over 1,000 objects and could retrieve them by verbal command. One study of canine cognitive abilities found that dogs' capabilities are similar to those of horses, chimpanzees, and cats. One study of 18 household dogs found that the dogs could not distinguish food bowls at specific locations without distinguishing cues; the study stated that this indicates a lack of spatial memory. A study stated that dogs have a visual sense of number: the dogs showed ratio-dependent activation for numerical values ranging from 1–3 up to more than four. Dogs demonstrate a theory of mind by engaging in deception. Another experimental study showed evidence that Australian dingoes can outperform domestic dogs in non-social problem-solving, indicating that domestic dogs may have lost much of their original problem-solving abilities once they joined humans. Another study showed that dogs stared at humans after failing to complete an impossible version of a task they had been trained to solve, whereas wolves in the same situation avoided staring at humans altogether. Dog communication is the transfer of information between dogs, as well as between dogs and humans. Communication behaviors of dogs include eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). Dogs mark their territories by urinating on them, which is more likely when entering a new environment. Dogs of both sexes may also urinate to communicate anxiety, frustration, or submissiveness, or when in exciting or relaxing situations. Overarousal in dogs can result from elevated cortisol levels. Dogs begin socializing with other dogs by the time they reach the ages of 3 to 8 weeks, and at about 5 to 12 weeks of age, they shift their focus from dogs to humans. Belly exposure in dogs can be a defensive behavior that may precede a bite, or a way of seeking comfort. Humans communicate with dogs by using vocalization, hand signals, and body posture. With their acute sense of hearing, dogs rely on the auditory aspect of communication for understanding and responding to various cues, including the distinctive barking patterns that convey different messages. A study using functional magnetic resonance imaging (fMRI) has shown that dogs respond to both vocal and non-vocal sounds using a brain region towards the temporal pole, similar to that of human brains. Most dogs also looked significantly longer at the face whose expression matched the valence of the vocalization. A study of caudate responses shows that dogs tend to respond more positively to social rewards than to food rewards. Ecology The dog is the most widely abundant large carnivoran living in the human environment. In 2020, the estimated global dog population was between 700 million and 1 billion. In the same year, a study found the dog to be the most popular pet in the United States, being present in 34 out of every 100 homes. About 20% of the dog population live in developed countries. 
As of 2001, an estimated three-quarters of the world's dog population lived in the developing world as feral, village, or community dogs. Most of these dogs live as scavengers and have never been owned by humans, with one study showing that village dogs' most common responses when approached by strangers are to run away (52%) or respond aggressively (11%). Feral and free-ranging dogs' potential to compete with other large carnivores is limited by their strong association with humans. Although wolves are known to kill dogs, wolves tend to live in pairs in areas where they are highly persecuted, putting them at a disadvantage when facing large dog groups. In some instances, wolves have displayed an uncharacteristic fearlessness of humans and buildings when attacking dogs, to the extent that they have to be beaten off or killed. Although the number of dogs killed each year is relatively low, there is still a fear among humans of wolves entering villages and farmyards to take dogs, and losses of dogs to wolves have led to demands for more liberal wolf hunting regulations. Coyotes and big cats have also been known to attack dogs. In particular, leopards are known to have a preference for dogs and have been recorded killing and consuming them regardless of their size. Siberian tigers in the Amur river region have killed dogs in the middle of villages; tigers do not tolerate wolves as competitors within their territories and may regard dogs in the same way. Striped hyenas are known to kill dogs in their range. Dogs as introduced predators have affected the ecology of New Zealand, which lacked indigenous land-based mammals before human settlement. Dogs have made 11 vertebrate species extinct and are identified as a 'potential threat' to at least 188 threatened species worldwide. Dogs have also been linked to the extinction of 156 animal species. Dogs have been documented killing a few individuals of the kagu, an endangered bird of New Caledonia. Dogs are typically described as omnivores. Compared to wolves, dogs from agricultural societies have extra copies of amylase and other genes involved in starch digestion that contribute to an increased ability to thrive on a starch-rich diet. Similar to humans, some dog breeds produce amylase in their saliva and are classified as having a high-starch diet. Despite being omnivores, dogs are only able to conjugate bile acids with taurine. They must get vitamin D from their diet. Of the twenty-one amino acids common to all life forms (including selenocysteine), dogs cannot synthesize ten: arginine, histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Like cats, dogs require arginine to maintain nitrogen balance. These nutritional requirements place dogs halfway between carnivores and omnivores. As a domesticated or semi-domesticated animal, the dog is found nearly everywhere humans live, with notable exceptions: dogs were introduced to Antarctica as sled dogs, but beginning in December 1993 they were outlawed by the Protocol on Environmental Protection to the Antarctic Treaty, an international agreement, due to the possible risk of spreading infections. Roles with humans The domesticated dog originated as a predator and scavenger. Dogs inherited complex behaviors, such as bite inhibition, from their wolf ancestors, which would have been pack hunters with complex body language. 
These sophisticated forms of social cognition and communication may account for dogs' trainability, playfulness, and ability to fit into human households and social situations, and probably also their co-existence with early human hunter-gatherers. Dogs perform many roles for people, such as hunting, herding, pulling loads, protection, assisting police and the military, companionship, and aiding disabled individuals. These roles in human society have earned them the nickname "man's best friend" in the Western world. In some cultures, however, dogs are also a source of meat. The keeping of dogs as companions, particularly by elites, has a long history. Pet-dog populations grew significantly after World War II as suburbanization increased. Since the 1980s, the pet dog's functions have changed; one change has been the increased role of dogs in the emotional support of their human guardians. Within the second half of the 20th century, more and more dog owners considered their animal to be a part of the family; this major shift in social status encouraged the dog to conform to social expectations of personality and behavior. A second change has been the broadening of the concepts of family and the home to include dogs-as-dogs within everyday routines and practices. Products such as dog-training books, classes, and television programs target dog owners. Some dog-trainers have promoted a dominance model of dog-human relationships. However, the idea of the "alpha dog" trying to be dominant is based on a controversial theory about wolf packs. It has been disputed that "trying to achieve status" is characteristic of dog-human interactions. Human family members have increased their participation in activities in which the dog is an integral partner, such as dog dancing and dog yoga. According to statistics published by the American Pet Products Manufacturers Association in the National Pet Owner Survey in 2009–2010, an estimated 77.5 million people in the United States have pet dogs. The source shows that nearly 40% of American households own at least one dog, of which 67% own just one dog, 25% own two dogs, and nearly 9% own more than two dogs. The data also show an equal number of male and female pet dogs; less than one-fifth of owned dogs come from shelters. In addition to their role as companion animals, dogs have been bred for herding livestock (such as collies and sheepdogs); for hunting; for rodent control (such as terriers); as search and rescue dogs; as detection dogs (such as those trained to detect illicit drugs or chemical weapons); as home guard dogs; as police dogs (sometimes nicknamed "K-9"); as welfare-purpose dogs; as dogs that assist fishermen in retrieving their nets; and as dogs that pull loads (such as sled dogs). In 1957, the dog Laika became one of the first animals to be launched into Earth orbit, aboard the Soviet Sputnik 2; Laika died during the flight from overheating. Various kinds of service dogs and assistance dogs, including guide dogs, hearing dogs, mobility assistance dogs, and psychiatric service dogs, assist individuals with disabilities. A study of 29 dogs found that 9 dogs owned by people with epilepsy were reported to exhibit attention-getting behavior toward their handler 30 seconds to 45 minutes prior to an impending seizure; this behavior showed no significant correlation with the patients' demographics, health, or attitude towards their pets. Dogs compete in breed-conformation shows and dog sports (including racing, sledding, and agility competitions). 
In dog shows, also referred to as "breed shows", a judge familiar with the specific dog breed evaluates individual purebred dogs for conformity with their established breed type as described in a breed standard. Weight pulling, a dog sport involving pulling weight, has been criticized for promoting doping and for its risk of injury. Humans have consumed dog meat going back at least 14,000 years. It is unknown to what extent prehistoric dogs were consumed and bred for meat. For centuries, the practice was prevalent in Southeast Asia, East Asia, Africa, and Oceania before cultural changes triggered by the spread of religions resulted in dog meat consumption declining and becoming more taboo. Switzerland, Polynesia, and pre-Columbian Mexico historically consumed dog meat. Some Native American dogs, like the Peruvian Hairless Dog and Xoloitzcuintle, were raised to be sacrificed and eaten. Han Chinese traditionally ate dogs. Consumption of dog meat declined but did not end during the Sui dynasty (581–618) and Tang dynasty (618–907) due in part to the spread of Buddhism and the upper class rejecting the practice. Dog consumption was rare in India, Iran, and Europe. Eating dog meat is a social taboo in most parts of the world, though some still consume it in modern times. It is still consumed in some East Asian countries, including China, Vietnam, Korea, Indonesia, and the Philippines. An estimated 30 million dogs are killed and consumed in Asia every year. China is the world's largest consumer of dogs, with an estimated 10 to 20 million dogs killed every year for human consumption. In Vietnam, about 5 million dogs are slaughtered annually. In 2024, China, Singapore, and Thailand placed a ban on the consumption of dogs within their borders. In some parts of Poland and Central Asia, dog fat is reportedly believed to be beneficial for the lungs. Proponents of eating dog meat have argued that placing a distinction between livestock and dogs is Western hypocrisy and that there is no difference in eating different animals' meat. There is a long history of dog meat consumption in South Korea, but the practice has fallen out of favor. A 2017 survey found that under 40% of participants supported a ban on the distribution and consumption of dog meat. This increased to over 50% in 2020, suggesting changing attitudes, particularly among younger individuals. In 2018, the South Korean government passed a bill banning restaurants that sell dog meat from doing so during that year's Winter Olympics. On 9 January 2024, the South Korean parliament passed a law banning the distribution and sale of dog meat. It will take effect in 2027, with plans to assist dog farmers in transitioning to other products. The primary type of dog raised for meat in South Korea has been the Nureongi. In North Korea where meat is scarce, eating dog is a common and accepted practice, officially promoted by the government. In 2018, the World Health Organization (WHO) reported that 59,000 people died globally from rabies, with 59.6% of the deaths in Asia and 36.4% in Africa. Rabies is a disease for which dogs are the most significant vector. Dog bites affect tens of millions of people globally each year. The primary victims of dog bite incidents are children. They are more likely to sustain more serious injuries from bites, which can lead to death. Sharp claws can lacerate flesh and cause serious infections. In the United States, cats and dogs are a factor in more than 86,000 falls each year. 
It has been estimated that around 2% of dog-related injuries treated in U.K. hospitals are domestic accidents. The same study concluded that dog-associated road accidents involving injuries more commonly involve two-wheeled vehicles. Some countries and cities have also banned or restricted certain dog breeds, usually for safety concerns. Toxocara canis (dog roundworm) eggs in dog feces can cause toxocariasis. It is estimated that nearly 14% of people in the United States are infected with Toxocara; about 10,000 cases are reported each year. Untreated toxocariasis can cause retinal damage and decreased vision. Dog feces can also contain hookworms that cause cutaneous larva migrans in humans. The scientific evidence is mixed as to whether a dog's companionship can enhance human physical and psychological well-being. Studies suggest that there are benefits to physical health and psychological well-being, but they have been criticized for being "poorly controlled". One study states that "the health of elderly people is related to their health habits and social supports but not to their ownership of, or attachment to, a companion animal". Earlier studies have shown that pet-dog or -cat guardians make fewer hospital visits and are less likely to be on medication for heart problems and sleeping difficulties than non-guardians. People with pet dogs took considerably more physical exercise than those with cats or those without pets; these effects are relatively long-term. Pet guardianship has also been associated with increased survival in cases of coronary artery disease. Human guardians are significantly less likely to die within one year of an acute myocardial infarction than those who do not own dogs. Studies have found a small to moderate correlation between dog-ownership and increased adult physical-activity levels. A 2005 paper by the British Medical Journal states: Recent research has failed to support earlier findings that pet ownership is associated with a reduced risk of cardiovascular disease, a reduced use of general practitioner services, or any psychological or physical benefits on health for community dwelling older people. Research has, however, pointed to significantly less absenteeism from school through sickness among children who live with pets. Health benefits of dogs can result from contact with dogs in general, not solely from having dogs as pets. For example, when in a pet dog's presence, people show reductions in cardiovascular, behavioral, and psychological indicators of anxiety and are exposed to immune-stimulating microorganisms, which can protect against allergies and autoimmune diseases (according to the hygiene hypothesis). Other benefits include dogs as social support. One study indicated that wheelchair-users experience more positive social interactions with strangers when accompanied by a dog than when they are not. In a 2015 study, it was found that having a pet made people more inclined to foster positive relationships with their neighbors. In one study, new guardians reported a significant reduction in minor health problems during the first month following pet acquisition, which was sustained through the 10-month study. Using dogs and other animals as a part of therapy dates back to the late-18th century, when animals were introduced into mental institutions to help socialize patients with mental disorders. 
Animal-assisted intervention research has shown that animal-assisted therapy with a dog can increase smiling and laughing among people with Alzheimer's disease. One study demonstrated that children with ADHD and conduct disorders who participated in an education program with dogs and other animals showed increased attendance, knowledge, and skill objectives and decreased antisocial and violent behavior compared with those not in an animal-assisted program. Artworks have depicted dogs as symbols of guidance, protection, loyalty, fidelity, faithfulness, alertness, and love. In ancient Mesopotamia, from the Old Babylonian period until the Neo-Babylonian period, dogs were the symbol of Ninisina, the goddess of healing and medicine, and her worshippers frequently dedicated small models of seated dogs to her. In the Neo-Assyrian and Neo-Babylonian periods, dogs served as emblems of magical protection. In China, Korea, and Japan, dogs are viewed as kind protectors. In mythology, dogs often appear as pets or as watchdogs. Stories of dogs guarding the gates of the underworld recur throughout Indo-European mythologies and may originate from Proto-Indo-European traditions. In Greek mythology, Cerberus is a three-headed, dragon-tailed watchdog who guards the gates of Hades. Dogs also feature in association with the Greek goddess Hecate. In Norse mythology, a dog called Garmr guards Hel, a realm of the dead. In Persian mythology, two four-eyed dogs guard the Chinvat Bridge. In Welsh mythology, Cŵn Annwn guards Annwn. In Hindu mythology, Yama, the god of death, owns two watchdogs named Shyama and Sharvara, which each have four eyes; they are said to watch over the gates of Naraka. A black dog is considered to be the vahana (vehicle) of Bhairava (an incarnation of Shiva). In Christianity, dogs represent faithfulness. Within the Roman Catholic denomination specifically, the iconography of Saint Dominic includes a dog, after the saint's mother dreamt of a dog springing from her womb and became pregnant shortly after that. As such, the name of the Dominican Order has been interpreted in Ecclesiastical Latin as Domini canis, meaning "dog of the Lord" or "hound of the Lord". In Christian folklore, a church grim often takes the form of a black dog to guard Christian churches and their churchyards from sacrilege. Jewish law does not prohibit keeping dogs and other pets but requires Jews to feed dogs (and other animals that they own) before themselves and to make arrangements for feeding them before obtaining them. The view on dogs in Islam is mixed, with some schools of thought viewing them as unclean, although Khaled Abou El Fadl states that this view is based on "pre-Islamic Arab mythology" and "a tradition [...] falsely attributed to the Prophet". The Sunni Maliki school jurists disagree with the idea that dogs are unclean. Terminology See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Quantitative_research] | [TOKENS: 1813] |
Contents Quantitative research Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is formed from a deductive approach in which emphasis is placed on the testing of theory, shaped by empiricist and positivist philosophies. Associated with the natural, applied, formal, and social sciences, this research strategy promotes the objective empirical investigation of observable phenomena to test and understand relationships. This is done through a range of quantifying methods and techniques, reflecting its broad utilization as a research strategy across differing academic disciplines. The objective of quantitative research is to develop and employ mathematical models, theories, and hypotheses pertaining to phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and the mathematical expression of quantitative relationships. Quantitative data is any data that is in numerical form, such as statistics, percentages, etc. The researcher analyses the data with the help of statistics and hopes the numbers will yield an unbiased result that can be generalized to some larger population. Qualitative research, on the other hand, inquires deeply into specific experiences, with the intention of describing and exploring meaning through text, narrative, or visual-based data, by developing themes exclusive to that set of participants. Quantitative research is widely used in psychology, economics, demography, sociology, marketing, community health, health and human development, gender studies, and political science; and less frequently in anthropology and history. Research in mathematical sciences, such as physics, is also "quantitative" by definition, though this use of the term differs in context. In the social sciences, the term relates to empirical methods originating in both philosophical positivism and the history of statistics, in contrast with qualitative research methods. Qualitative research produces information only on the particular cases studied, and any more general conclusions are only hypotheses. Quantitative methods can be used to verify which of such hypotheses are true. A comprehensive analysis of 1274 articles published in the top two American sociology journals between 1935 and 2005 found that roughly two-thirds of these articles used quantitative methods. Overview Quantitative research is generally closely affiliated with ideas from the 'scientific method'. Quantitative research is often contrasted with qualitative research, which purports to be focused more on discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modeled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand. For example, based on analysis of the history of science, Kuhn concludes that "large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences". 
Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?) (Kieron Yeoman). Although quantitative investigation of the world has existed since people first began to record events or objects that had been counted, the modern idea of quantitative processes has its roots in Auguste Comte's positivist framework. Positivism emphasized the use of the scientific method through observation to empirically test hypotheses explaining and predicting what, where, why, how, and when phenomena occurred. Positivist scholars like Comte believed that only scientific methods, rather than earlier spiritual explanations of human behavior, could advance understanding. Quantitative methods are an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative methods, reviews of the literature (including scholarly literature), interviews with experts, and computer simulation, and which forms an extension of data triangulation. Quantitative methods have limitations. These studies do not provide reasoning behind participants' responses, they often do not reach underrepresented populations, and they may span long periods in order to collect the data. Use of statistics Statistics is the most widely used branch of mathematics in quantitative research outside of the physical sciences, and it also finds applications within the physical sciences, such as in statistical mechanics. Statistical methods are used extensively within fields such as economics, the social sciences, and biology. Quantitative research using statistical methods starts with the collection of data, based on the hypothesis or theory. Usually a large sample of data is collected, which requires verification, validation, and recording before the analysis can take place. Software packages such as SPSS and R are typically used for this purpose. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide. Empirical relationships and associations are also frequently studied by using some form of the general linear model, a non-linear model, or factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation, although some, such as Clive Granger, suggest that a series of correlations can imply a degree of causality. This principle follows from the fact that it is always possible that a spurious relationship exists for variables between which covariance is found to some degree. 
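As a concrete illustration of the workflow just described, the following minimal Python sketch fits an ordinary least squares model of a hypothetical outcome (weight loss) on an explanatory variable of interest (dietary intake) while controlling for a covariate (exercise), mirroring the health example above. The data, variable names, and coefficients are invented for illustration only; in practice such analyses are typically carried out in packages such as SPSS or R, as noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: weight loss (kg) as a function of dietary intake
# (kcal/day), controlling for exercise (hours/week). All values are invented.
n = 200
intake = rng.normal(2200.0, 300.0, n)      # explanatory variable of interest
exercise = rng.normal(4.0, 1.5, n)         # control variable
weight_loss = 0.004 * (2500.0 - intake) + 0.5 * exercise + rng.normal(0.0, 0.8, n)

# Ordinary least squares: weight_loss ~ intercept + intake + exercise.
X = np.column_stack([np.ones(n), intake, exercise])
coef, *_ = np.linalg.lstsq(X, weight_loss, rcond=None)
print(dict(zip(["intercept", "intake", "exercise"], np.round(coef, 4))))
```

Including the exercise term in the design matrix is what "controlling for" amounts to here: the intake coefficient is estimated while exercise is held statistically constant.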
Associations may be examined between any combination of continuous and categorical variables using methods of statistics. Other data-analytic approaches for studying causal relations can be performed with Necessary Condition Analysis (NCA), which outlines must-have conditions for the studied outcome variable. Measurement Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Kuhn argued that, within quantitative research, the results obtained can prove to be strange, since accepting a theory on the basis of quantitative data may itself turn out to reflect a natural phenomenon; he argued that such anomalies are interesting when they arise during the process of obtaining data. In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models such as the Rasch model and item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences. Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured. Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or the amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree-ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short- and long-term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record, there is considerable skill in selecting proxies that are well correlated with the desired variable. Relationship with qualitative methods In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology, and psychology, the use of one or the other type of method can be a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn on the other. The majority tendency throughout the history of social science, however, has been to use eclectic approaches, combining both methods. Qualitative methods might be used to understand the meaning of the conclusions produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. 
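To make the calibration step concrete, here is a minimal Python sketch, with invented numbers, of the approach described above: a proxy such as tree-ring width is regressed against the instrumental temperature record over their period of overlap, the R² of the fit indicates how much of the variance the proxy captures, and the fitted relationship is then applied to pre-instrumental proxy values to reconstruct earlier conditions. The figures and the simple linear form are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration of a climate proxy: tree-ring widths overlap with an
# instrumental temperature record for 100 years; earlier widths are proxy-only.
temp = rng.normal(9.0, 0.6, 100)                       # instrumental record (degrees C)
ring_width = 0.8 * temp + rng.normal(0.0, 0.4, 100)    # proxy responding to temperature

# Calibrate: regress temperature on ring width over the instrumental period.
slope, intercept = np.polyfit(ring_width, temp, 1)
predicted = intercept + slope * ring_width
r_squared = np.corrcoef(temp, predicted)[0, 1] ** 2    # share of variance captured

# Apply the calibration to pre-instrumental ring widths to reconstruct temperature.
old_ring_width = rng.normal(7.2, 0.7, 50)
reconstruction = intercept + slope * old_ring_width
print(f"R^2 over the calibration period: {r_squared:.2f}")
print(f"Reconstructed mean temperature: {reconstruction.mean():.2f}")
```

The R² computed over the overlap period is one simple way of expressing how much of the original record's variance the proxy can recover, which is the point the paragraph above makes in prose.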
This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research. Examples See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Amazons] | [TOKENS: 6845] |
Contents Amazons In Greek mythology, the Amazons (Ancient Greek: Ἀμαζόνες) were female warriors and hunters, known for their physical agility, strength, archery, riding skills, and the arts of combat. Their society was closed to men; they raised only their daughters and returned their sons to their fathers, with whom they would only socialize briefly in order to reproduce. They were portrayed in a number of ancient epic poems and legends, such as the Labours of Heracles, the Argonautica, and the Iliad. Courageous and fiercely independent, the Amazons, commanded by their queen, regularly undertook extensive military expeditions into the far corners of the world, from Scythia to Thrace, Asia Minor, and the Aegean Islands, reaching as far as Arabia and Egypt. Besides military raids, the Amazons are also associated with the foundation of temples and the establishment of numerous ancient cities like Ephesos, Cyme, Smyrna, Sinope, Myrina, Magnesia, Pygela, etc. The texts of the original myths envisioned the homeland of the Amazons at the periphery of the then-known world. Various claims to the exact place ranged from provinces in Asia Minor (Lycia, Caria, etc.) to the steppes around the Black Sea. However, authors most frequently referred to Pontus in northern Anatolia, on the southern shores of the Black Sea, as the independent Amazon kingdom where the Amazon queen resided at her capital Themiscyra, on the banks of the Thermodon river. Decades of archaeological discoveries of burial sites of female warriors, including royalty, in the Eurasian Steppes suggest that the horse cultures of the Scythian, Sarmatian, and Hittite peoples likely inspired the Amazon myth. In 2019, a grave with multiple generations of female Scythian warriors, armed and in golden headdresses, was found near Voronezh in southwestern Russia. In 2017, another discovery was made in Armenia, where the Iron Age grave of a woman buried with jewelry showed the injuries and musculature of a horseback warrior who frequently saw battle. Name The origin of the word is uncertain. It may be derived from an Iranian ethnonym *ha-mazan- 'warriors', a word attested indirectly through a derivation, a denominal verb in Hesychius of Alexandria's gloss "ἁμαζακάραν· πολεμεῖν. Πέρσαι" ("hamazakaran: 'to make war' in Persian"), where it appears together with the Indo-Iranian root *kar- 'make'. Alternatively, a Greek derivation from *n̥-mn̥gʷ-yō-nós 'manless, without husbands' (the alpha privative combined with a derivative of *man-, cognate with Proto-Balto-Slavic *mangjá-, found in Czech muž) has been proposed, an explanation deemed "unlikely" by Hjalmar Frisk. A further explanation proposes Iranian *ama-janah 'virility-killing' as the source. Among the ancient Greeks, the term Amazon was popularly folk-etymologized as originating from the Greek ἀμαζός, amazos ('breastless'), from a- ('without') and mazos, a variant of mastos ('breast'), connected with an etiological tradition once claimed by Marcus Justinus, who alleged that Amazons had their right breast cut off or burnt out. There is no indication of such a practice in ancient works of art, in which the Amazons are always represented with both breasts, although one is frequently covered. According to Philostratus, Amazon babies were not fed just with the right breast. Author Adrienne Mayor suggests that the false etymology led to the myth. 
Herodotus used the terms Androktones (Ἀνδροκτόνες) 'killers/slayers of men' or 'of husbands' and Androleteirai (Ἀνδρολέτειραι) 'destroyers of men, murderesses'. Amazons are called Antianeirai (Ἀντιάνειραι) 'equivalent to men', and Aeschylus used the term Styganor (Στυγάνωρ) 'those who loathe all men'. In his Prometheus Bound and in The Suppliants, Aeschylus referred to the Amazons as 'the unwed, flesh-devouring Amazons' (...τὰς ἀνάνδρους κρεοβόρους τ᾽ Ἀμαζόνας). In the Hippolytus tragedy, Phaedra calls Hippolytus 'the son of the horse-loving Amazon' (...τῆς φιλίππου παῖς Ἀμαζόνος βοᾷ Ἱππόλυτος...). In his Dionysiaca, Nonnus calls the Amazons of Dionysus Androphonus (Ἀνδροφόνους) 'men slaying'. Herodotus stated that in the Scythian language, the Amazons were called Oiorpata, which he explained as being from oior 'man' and pata 'to slay'. Historiography The ancient Greeks never had any doubts that the Amazons were, or had been, real. They were not the only people enchanted by warlike women of nomadic cultures; similar tales come from ancient Egypt, Persia, India, and China. Greek heroes of old had encounters with the queens of this martial society and fought them. However, the Amazons' original home was not exactly known and was thought to lie in the obscure lands beyond the civilized world. As a result, many classical scholars consider Amazons to be entirely fictional figures, invented by Greek men to serve as "anti-women" or to symbolize Persians. Some authors preferred comparisons to cultures of Asia Minor or even Minoan Crete. The most obvious historical candidates are Lycia, Scythia, and Sarmatia, in line with the account by Herodotus. In his Histories (5th century BCE), Herodotus claims that the Sauromatae (predecessors of the Sarmatians), who ruled the lands between the Caspian Sea and the Black Sea, arose from a union of Scythians and Amazons. Herodotus also observed rather unusual customs among the Lycians of southwest Asia Minor. The Lycians evidently followed matrilineal rules of descent, virtue, and status. They named themselves along their maternal family line, and a child's status was determined by the mother's reputation. This remarkably high esteem of women and these legal regulations based on maternal lines, still in effect in the 5th century BCE in the Lycian regions that Herodotus had traveled to, suggested to him the idea that these people were descendants of the mythical Amazons. Modern historiography no longer relies exclusively on textual and artistic material, but also on the vast archaeological evidence of over a thousand nomad graves from steppe territories stretching from the Black Sea all the way to Mongolia. Discoveries of battle-scarred female skeletons buried with their weapons (bows and arrows, quivers, and spears) prove that women warriors were not merely figments of imagination, but a product of the Scythian and Sarmatian horse-centered lifestyle; however, it is not known for certain whether these people were the inspiration for the Amazons of Greek mythology. Mythology According to myth, Otrera, the first Amazon queen, was the offspring of a romance between Ares, the god of war, and the nymph Harmonia of the Akmonian Wood, and as such a demigoddess. Early records refer to two events in which Amazons appeared prior to the Trojan War (before 1250 BCE). 
Within the epic context, the Greek hero Bellerophon, grandfather of the brothers and Trojan War veterans Glaukos and Sarpedon, faced Amazons during his stay in Lycia, when King Iobates sent him to fight them, hoping they would kill him; yet Bellerophon slew them all. The youthful King Priam of Troy fought on the side of the Phrygians, who were attacked by Amazons at the Sangarios River. There are Amazon characters in Homer's Trojan War epic poem, the Iliad, one of the oldest surviving texts in Europe (around 8th century BCE). The now lost epic Aethiopis (probably by Arctinus of Miletus, 6th century BCE), like the Iliad and several other epics, is one of the works that in combination form the Trojan War Epic Cycle. In one of the few references to the text, an Amazon force under queen Penthesilea, who was of Thracian birth, came to join the ranks of the Trojans after Hector's death and initially put the Greeks under serious pressure. Only after the greatest effort, and with the help of the reinvigorated hero Achilles, did the Greeks eventually triumph. Penthesilea died fighting the mighty Achilles in single combat. Homer himself deemed the Amazon myths to be common knowledge all over Greece, which suggests that they had already been known for some time before him. He was also convinced that the Amazons lived not at the fringes of the known world, but somewhere in or around Lycia in Asia Minor, a place well within the Greek world. Troy is mentioned in the Iliad as the place of Myrine's death. Myrine was later identified as an Amazon queen: according to Diodorus (1st century BCE), the Amazons under her rule invaded the territories of the Atlantians, defeated the army of the Atlantian city of Cerne, and razed the city to the ground. The poet Bacchylides (6th century BCE) and the historian Herodotus (5th century BCE) located the Amazon homeland in Pontus at the southern shores of the Black Sea, and the capital Themiscyra at the banks of the Thermodon (the modern Terme river), by the modern city of Terme. Herodotus also explains how it came to be that some Amazons would eventually be living in Scythia. A Greek fleet, sailing home after defeating the Amazons in battle at the Thermodon river, included three ships crowded with Amazon prisoners. Once out at sea, the Amazon prisoners overwhelmed and killed the small crews of the prisoner ships and, despite not having even basic navigation skills, managed to escape and safely disembark at the Scythian shore. As soon as the Amazons had caught enough horses, they easily asserted themselves in the steppe between the Caspian Sea and the Black Sea and, according to Herodotus, would eventually assimilate with the Scythians, whose descendants were the Sauromatae, the predecessors of the Sarmatians. Strabo (1st century BCE) visited and confirmed the original homeland of the Amazons on the plains by the Thermodon river. However, the Amazons were long gone and not seen again during his lifetime, having allegedly retreated into the mountains. Strabo added that other authors, among them Metrodorus of Scepsis and Hypsicrates, claim that after abandoning Themiscyra, the Amazons had chosen to resettle beyond the borders of the Gargareans, an all-male tribe native to the northern foothills of the Caucasian Mountains. The Amazons and Gargareans had for many generations met in secrecy once a year, during two months in spring, in order to produce children. 
These encounters would take place in accordance with ancient tribal customs and collective offers of sacrifices. All females were retained by the Amazons themselves, and males were returned to the Gargareans. The 5th-century BCE poet Magnes sings of the bravery of the Lydians in a cavalry battle against the Amazons. Hippolyte was an Amazon queen killed by Heracles, who had set out to obtain the queen's magic belt in a task he was to accomplish as one of the Labours of Heracles. Although neither side had intended to resort to lethal combat, a misunderstanding led to the fight. In the course of this, Heracles killed the queen and several other Amazons. In awe of the strong hero, the Amazons eventually handed the belt to Heracles. In another version, Heracles does not kill the queen, but exchanges her kidnapped sister Melanippe for the belt. Queen Hippolyte was abducted by Theseus, who took her to Athens, where they married and had a son, Hippolytus. In other versions, the kidnapped Amazon is called Antiope, the sister of Hippolyte. In revenge, the Amazons invaded Greece, plundered some cities along the coast of Attica, and besieged and occupied Athens. According to another account, Hippolyte, who fought on the side of Athens, was killed during the final battle along with all of the Amazons. According to Plutarch, the god Dionysus and his companions fought Amazons at Ephesus. The Amazons fled to Samos, and Dionysus pursued them and killed a great number of them at a site since called Panaema (blood-soaked field). The Christian author Eusebius writes that during the reign of Oxyntes, one of the mythical kings of Athens, the Amazons burned down the temple at Ephesus. In another myth, Dionysus unites with the Amazons to fight against Cronus and the Titans. Polyaenus writes that after Dionysus had subdued the Indians, he allied with them and the Amazons and took them into his service, and they served him in his campaign against the Bactrians. Nonnus in his Dionysiaca reports about the Amazons of Dionysus, but states that they do not come from Thermodon. Amazons are also mentioned by historians and biographers of Alexander the Great, who report that Queen Thalestris sought him out in order to bear him a child. However, other biographers of Alexander dispute the claim, including the highly regarded Plutarch. He noted a moment when Alexander's naval commander Onesicritus read a passage about the Amazons from his Alexander History to King Lysimachus of Thrace, who had taken part in the original expedition. The king smiled at him and said: "And where was I, then?" A story in the Alexander Romance involves his conquest of the Amazons, carried out mainly by an exchange of threatening letters. The Talmud recounts that Alexander wanted to conquer a "kingdom of women" but reconsidered when the women told him: If you kill us, people will say: Alexander kills women; and if we kill you, people will say: Alexander is the king whom women killed in battle. Virgil's characterization of the Volsci warrior maiden Camilla in the Aeneid borrows from the myths of the Amazons. Philostratus, in Heroica, writes that the Mysian women fought on horses alongside the men, just as the Amazons did. Their leader was Hiera, wife of Telephus. The Amazons are also said to have undertaken an expedition against the Island of Leuke, at the mouth of the Danube, where the ashes of Achilles had been deposited by Thetis. The ghost of the dead hero so terrified the horses that they threw off and trampled the invaders, who were forced to retreat. 
Virgil touches on the Amazons and their queen Penthesilea in his epic Aeneid (around 20 BCE). The biographer Suetonius had Julius Caesar remark in his De vita Caesarum that the Amazons once ruled a large part of Asia. Appian provides a vivid description of Themiscyra and its fortifications in his account of Lucius Licinius Lucullus's Siege of Themiscyra in 71 BCE during the Third Mithridatic War. An Amazon myth has been partly preserved in two badly fragmented versions centering on historical figures of 7th-century BCE Egypt. The Egyptian prince Petechonsis and allied Assyrian troops undertook a joint campaign into the Land of Women, in the Middle East at the border to India. Petechonsis initially fought the Amazons, but soon fell in love with their queen Sarpot and eventually allied with her against an invading Indian army. This story is said to have originated in Egypt independently of Greek influences. Sources provide the names of individual Amazons, who are referred to as queens of their people, even as the heads of dynasties. Without a male companion, they are portrayed in command of their female warriors. Various authors and chroniclers Quintus Smyrnaeus, author of the Posthomerica, lists the attendant warriors of Penthesilea: "Clonie was there, Polemusa, Derinoe, Evandre, and Antandre, and Bremusa, Hippothoe, dark-eyed Harmothoe, Alcibie, Derimacheia, Antibrote, and Thermodosa glorying with the spear." Diodorus Siculus lists twelve Amazons who challenged and died fighting Heracles during his quest for Hippolyta's girdle: Aella, Philippis, Prothoe, Eriboea, Celaeno, Eurybia, Phoebe, Deianeira, Asteria, Marpe, Tecmessa, and Alcippe. After Alcippe's death, a group attack followed. Diodorus also mentions Melanippe, whom Heracles set free after accepting her girdle and Antiope as ransom. Diodorus lists another group with Myrina as the queen who commanded the Amazons in a military expedition in Libya, as well as her sister Mytilene, after whom she named the city of the same name. Myrina also named three more cities after the Amazons who held the most important commands under her: Cyme, Pitane, and Priene. Both Justin in his Epitome of Trogus Pompeius and Paulus Orosius give an account of the Amazons, citing the same names. Queens Marpesia and Lampedo shared power during an incursion into Europe and Asia, where they were slain. Marpesia's daughter Orithyia succeeded them and was greatly admired for her skill in war. She shared power with her sister Antiope, but was engaged in war abroad when Heracles attacked. Two of Antiope's sisters were taken prisoner, Melanippe by Heracles and Hippolyta by Theseus. Heracles later restored Melanippe to her sister after receiving the queen's arms in exchange, though in other accounts she was killed by Telamon. They also mention Penthesilea's role in the Trojan War. Another list of Amazons' names is found in Hyginus's Fabulae. Along with Hippolyta, Otrera, Antiope, and Penthesilea, it attests the following names: Ocyale, Dioxippe, Iphinome, Xanthe, Hippothoe, Laomache, Glauce, Agave, Theseis, Clymene, Polydora. Perhaps the most important is Queen Otrera, consort of Ares and mother by him of Hippolyta and Penthesilea. She is also known for building a temple to Artemis at Ephesus. A different set of names is found in Valerius Flaccus's Argonautica. He mentions Euryale, Harpe, Lyce, Menippe, and Thoe. 
Of these Lyce also appears on a fragment, preserved in the Latin Anthology where she is said to have killed the hero Clonus of Moesia, son of Doryclus, with her javelin. Palaephatus, who himself might have been a fictional character, attempted to rationalize the Greek myths in his work On Unbelievable Tales. He suspected that the Amazons were probably men who were mistaken for women by their enemies because they wore clothing that reached their feet, tied up their hair in headbands, and shaved their beards. Probably the first in a long line of skeptics, he rejected any real basis for them, reasoning that because they did not exist during his time, most probably they did not exist in the past either. He himself contradicted this in his rationalizing of Oedipus and the Sphinx, portraying the latter as an Amazon woman named "Sphinx." Late Antiquity, Middle Ages, and Renaissance literature Stephanus of Byzantium (7th-century CE) provides numerous alternative lists of the Amazons, including for those who died in combat against Heracles, describing them as the "most prominent of their people". Both Stephanus and Eustathius connect these Amazons with the placename "Thibais", which they claim to have been derived from the Amazon Thiba's name. Several of Stephanus's Amazons served as eponyms for cities in Asia Minor, like Cyme and Smyrna or Amastris, who was believed to lend her name to the city previously known as Kromna, although in fact it was named after the historical Amastris. The city Anaea in Caria was named after an Amazon. In his work Getica (on the origin and history of the Goths, c. 551 CE), Jordanes asserts that the Goths' ancestors, descendants of Magog, originally lived in Scythia, at the Sea of Azov between the Dnieper and Don Rivers. When the Goths were abroad campaigning against Pharaoh Vesosis, their women, on their own successfully fended off a raid by a neighboring tribe. Emboldened, the women established their own army under Marpesia, crossed the Don and invaded eastward into Asia. Marpesia's sister Lampedo remained in Europe to guard the homeland. They procreated with men once a year. These women conquered Armenia, Syria, and all of Asia Minor, even reaching Ionia and Aeolis, holding this vast territory for 100 years. In Digenes Akritas, the twelfth century medieval epic of Basil, the Greco-Syrian knight of the Byzantine frontier, the hero battles and then commits adultery with the female warrior Maximo (killing her afterwards in one version of the epic), descended from some Amazons and taken by Alexander from the Brahmans. John Tzetzes lists in Posthomerica twenty Amazons, who fell at Troy. This list is unique in its attestation for all the names but Antianeira, Andromache, and Hippothoe. Other than these three, the remaining 17 Amazons were named as Toxophone, Toxoanassa, Gortyessa, Iodoce, Pharetre, Andro, Ioxeia, Oistrophe, Androdaixa, Aspidocharme, Enchesimargos, Cnemis, Thorece, Chalcaor, Eurylophe, Hecate, and Anchimache. Famous medieval traveller John Mandeville mentions them in his book: Beside the land of Chaldea is the land of Amazonia, that is the land of Feminye. And in that realm is all woman and no man; not as some may say, that men may not live there, but for because that the women will not suffer no men amongst them to be their sovereigns. Medieval and Renaissance authors credit the Amazons with the invention of the battle-axe. 
This is probably related to the sagaris, an axe-like weapon associated with both Amazons and Scythian tribes by Greek authors (see also Thracian tomb of Aleksandrovo kurgan). Paulus Hector Mair expresses astonishment that such a "manly weapon" should have been invented by a "tribe of women", but he accepts the attribution out of respect for his authority, Johannes Aventinus. Ariosto's Orlando Furioso contains a country of warrior women, ruled by Queen Orontea; the epic describes an origin much like that in Greek myth, in that the women, abandoned by a band of warriors and unfaithful lovers, rallied together to form a nation in which the presence of men was severely restricted, to prevent them from regaining power. The Amazons and Queen Hippolyta are also referenced in Geoffrey Chaucer's Canterbury Tales in "The Knight's Tale". Amazons continued to be a subject of scholarly debate during the European Renaissance, and with the onset of the Age of Exploration, encounters were reported from ever more distant lands. In 1542, Francisco de Orellana reached the Amazon River, naming it after the Icamiabas, a tribe of warlike women he claimed to have encountered and fought on the Nhamundá River, a tributary of the Amazon. Afterwards the whole basin and region of the Amazon (Amazônia in Portuguese, Amazonía in Spanish) were named after the river. Amazons also figure in the accounts of both Christopher Columbus and Walter Raleigh. Amazons in art Beginning around 550 BCE, depictions of Amazons as daring fighters and equestrian warriors appeared on vases. After the Battle of Marathon in 490 BCE, the Amazon battle, or Amazonomachy, became a popular motif on pottery. By the sixth century BCE, public and privately displayed artwork used Amazon imagery for pediment reliefs, sarcophagi, mosaics, pottery, jewelry, and even monumental sculptures that adorned important buildings like the Parthenon in Athens. Amazon motifs remained popular until the Roman imperial period and into Late Antiquity. Apart from the artistic desire to express the passionate womanhood of the Amazons in contrast with the manhood of their enemies, some modern historians interpret the popularity of Amazons in art as an indicator of societal trends, both positive and negative. Greek and Roman societies, however, utilized the Amazon mythology as a literary and artistic vehicle to unite against a commonly held enemy. The metaphysical characteristics of Amazons were seen as personifications of both nature and religion. Roman authors like Virgil, Strabo, Pliny the Elder, Curtius, Plutarch, Arrian, and Pausanias advocated the greatness of the state, as Amazon myths served in discussions of the origins and identity of the Roman people. However, that changed over time. Amazons in Roman literature and art have many faces, such as the Trojan ally, the warrior goddess, the native Latin, the warmongering Celt, the proud Sarmatian, the hedonistic and passionate Thracian warrior queen, the subdued Asian city, and the worthy Roman foe. In Renaissance Europe, artists started to reevaluate and depict Amazons based on Christian ethics. Queen Elizabeth of England was associated with Amazon warrior qualities (seen as the foremost ancient examples of feminism) during her reign and was indeed depicted as such. However, as Winfried Schleiner explains in Divina Virago, Celeste T. Wright has given a detailed account of the bad reputation Amazons had in the Renaissance.
She notes that she has not found any Elizabethans comparing the Queen to an Amazon and suggests that they might have hesitated to do so because of the association of Amazons with enfranchisement of women, which was considered contemptible. Elizabeth was present at a tournament celebrating the marriage of the Earl of Warwick and Anne Russell at Westminster Palace on 11 November 1565 that involved male riders dressed as Amazons. They accompanied the challengers, carrying their heraldry. These riders wore crimson gowns, masks with long hair attached, and swords. Around 1598, Peter Paul Rubens and Jan Brueghel depicted the Battle of the Amazons in a most dramatic Baroque painting; it was followed in the Rococo period by a painting by Johann Georg Platzer, also titled Battle of the Amazons. In 19th-century European Romanticism, the German artist Anselm Feuerbach also occupied himself with the Amazons. Of Feuerbach's painting, Gert Schiff wrote: It engendered all the aspirations of the Romantics: their desire to transcend the boundaries of the ego and of the known world; their interest in the occult in nature and in the soul; their search for a national identity, and the ensuing search for the mythic origins of the Germanic nation; finally, their wish to escape the harsh realities of the present through immersion in an idealized past. Maps The medieval Borgia (Velletri) map shows a picture of women with bow and arrow, and with spear and shield, accompanied by the description "The land formerly of illustrious women", placed in the north (at the bottom of the map) on the "Edilus fluuius maximus" (the Volga). The medieval Fra Mauro map places the country of Amazonia (Ancient Greek: Ἀμαζόνια) on the Middle Volga. Archaeology Speculation that the idea of Amazons, specifically the Amazons known to the Greeks, contains a core of reality is based on archaeological discoveries at kurgan burial sites in the steppes of southern Ukraine and Russia. The varied war weapon artifacts found in graves of numerous high-ranking Scythian and Sarmatian warrior women have led scholars to conclude that the Amazonian legend has been inspired by the real world: about 20% of the warrior graves on the lower Don and lower Volga contained women dressed for battle much as the men were. Armed women accounted for up to 25% of Sarmatian military burials. Russian archaeologist Vera Kovalevskaya asserts that when Scythian men were abroad fighting or hunting, women would have to be able to competently defend themselves, their animals, and their pastures. In early 20th-century Minoan archaeology, a theory regarding Amazon origins in Minoan civilization was raised in an essay by Lewis Richard Farnell and John Myres. According to Myres, the tradition, interpreted in the light of evidence furnished by supposed Amazon cults, seems closely comparable and may even have originated in Minoan culture. Modern legacy The city of Samsun in modern-day Samsun Province, Turkey, features an Amazon Village museum, to help bring attention to the legacy of the Amazons and to promote both academic interest and tourism. The Amazon warriors have been seen as a symbol of empowerment for feminist movements. This legacy has empowered and encouraged women to build their strength and stand against societal norms, and has inspired countless women to stand up for themselves and for what they believe. An annual Amazon Celebration Festival takes place in the Terme district.
During the Ottoman–Egyptian invasion of Mani in 1826, in the Battle of Diros, the women of Mani defeated the Ottoman army and were for this given the name 'the Amazons of Diros'. From 1936 to 1939, annual propaganda events called Night of the Amazons (Nacht der Amazonen) were performed in Nazi Germany at the Nymphenburg Palace Park in Munich. Announced as evening highlights of the International Horse Racing Week Munich-Riem, bare-breasted variety-show girls, the SS Cavalry, 2,500 participants, and international guests performed at the open-air revue. These revues served to promote an allegedly emancipated female role and a cosmopolitan and foreigner-friendly Nazi regime. Amazons are featured in role-playing and video games including Diablo, Heroes Unlimited, Aliens Unlimited, Amazon: Guardians of Eden, Flight of the Amazon Queen, A Total War Saga: Troy, Rome: Total War, Final Fantasy IV, Age of Wonders: Planetfall, the Legend of Zelda series, the Yu-Gi-Oh games, and Zeus: Master of Olympus. In Zeus: Master of Olympus, the Temple of Artemis has two companies of Amazon troops, and in certain main quest campaigns the protagonist has to deal with Amazons based in Themiscyra and three other queendoms. The Neptune trojans, asteroids 60° ahead of or behind Neptune on its orbit, are individually named after mythological Amazons. See also References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Simon_Maccabaeus] | [TOKENS: 1193] |
Contents Simon Thassi Simon Thassi (Hebrew: שִׁמְעוֹן הַתַּסִּי Šīməʿōn haTassī; died 135 BC) was a Jewish leader of the Hasmonean dynasty, serving as high priest, military commander, and ruler of Judea. The second son of the Hasmonean patriarch Mattathias and one of the Maccabean brothers, he assumed leadership after his brother Jonathan Apphus was captured by the Seleucid general Diodotus Tryphon. Simon played a central role in consolidating Hasmonean rule: he strengthened Judea's fortifications, expelled the Seleucid garrison from Jerusalem, and expanded Jewish settlement, laying the foundation for the Hasmonean state. His rule marked the beginning of effective Jewish independence. Simon assumed leadership in 143 BCE, completing Jerusalem's fortifications and securing key areas, including Gezer and the port city of Jaffa, where he stationed Jewish garrisons and settled Jewish inhabitants. He defended Judea from Tryphon's forces and recovered his brother Jonathan’s body for burial at Modi'in. Simon consolidated Judea's independence, cultivated relations with Rome, Sparta, and the Seleucid Empire, and was granted rights such as tax exemption and coinage, though he may not have used them. In 142/141 BCE, he captured the Acra fortress in Jerusalem, removing the last remaining Hellenistic presence in the city. A public assembly formalized his rule as high priest, military commander, and national leader of the Jews, with hereditary succession in his family "until a new prophet should arise." Simon was assassinated in 134 BCE at the fortress of Dok near Jericho by his son-in-law Ptolemy ben Abubus. His third son, John Hyrcanus, escaped and succeeded him, continuing the Hasmonean dynasty and expanding Judea's borders. Names The name "Thassi" has a connotation of "the Wise", a title which can also mean "the Director", "the Guide", "the Man of Counsel", and "the Zealous". This Simon is also sometimes distinguished as Simon the Hasmonean, Simon Maccabee, or (from Latin) Simon Maccabeus. History Simon took a prominent part in the Maccabean Revolt against the Seleucid Empire led by his brothers, Judas Maccabaeus and Jonathan Apphus. The successes of the Jews rendered it expedient for the Seleucid leaders in Syria to show them special favour. Therefore, Antiochus VI appointed Simon strategos, or military commander, of the coastal region stretching from the Ladder of Tyre to Egypt. As strategos, Simon gained control of the cities of Beth-zur and Joppa, garrisoning them with Jewish troops, and built the fortress of Adida. After the capture of Jonathan by the Seleucid general Diodotus Tryphon, Simon was elected leader by the people, assembled at Jerusalem. He at once completed the fortification of the capital, and made Joppa secure. At Hadid he blocked the advance of Tryphon, who was attempting to enter the country and seize the throne of Syria. Realizing he could gain nothing by force, Tryphon demanded a ransom for Jonathan and for the release of Jonathan's sons as hostages. Although Simon was aware that Tryphon would deceive him, both Josephus and 1 Maccabees state that he acceded to both demands so that the people might see that he had done everything possible for his brother. Jonathan was nevertheless assassinated, and the hostages were not returned. Simon thus became the sole leader of the people. As an opponent of Diodotus Tryphon, Simon decided to side with the Seleucid king, Demetrius II, to whom he sent a deputation requesting freedom from taxation for the country. 
The fact that his request was granted implied recognition of the political independence of Judea in the year 142 BCE. In 141 BCE, the Jews themselves issued a public decree at a large assembly "of the priests and the people and of the elders of the land, to the effect that Simon should be their leader and high priest forever, until there should arise a faithful prophet". It was then that Simon Thassi became High Priest of Judaea and Ethnarch (Prince of Judaea). He was the first prince of the Hasmonean dynasty, reigning from 141 to 134 BCE. Recognition of the new dynasty by the Roman Republic was accorded by the Senate about 139 BCE, when the delegation representing Simon was in Rome. Simon had made the Jewish people semi-independent of the Seleucid Empire. In 134 BCE, Simon and his two sons Mattathias and Judah were assassinated at a banquet at Dok by his son-in-law Ptolemy, the Seleucid governor at Jericho; Simon was the last of the Maccabees to 'die with his boots on'. Simon's third son John Hyrcanus succeeded him as high priest and ruler of Judea but was unable to capture Ptolemy, initially because the latter held John's mother hostage, and subsequently because his army disbanded in observance of the custom at the time of resting every seventh year. Under Hyrcanus (134–104 BCE) Jewish independence was finally achieved. Legacy Simon (and its Hebrew form, Simeon) would go on to become the most popular male name for some three centuries afterward in both the Hasmonean Kingdom and Roman Judaea. This was both to honor a Jewish hero who had attained independence for the Jewish state and because "Simon" did not sound artificial or strange to Greek ears. References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-hot-18] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. 
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 billion to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago, and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
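The roughly 38% surface-gravity figure quoted above follows from the mass and size ratios, since surface gravity scales as g ∝ M/R². A minimal sketch of that arithmetic is below; the Earth diameter (about 12,742 km) and the more precise Mars/Earth mass ratio (~0.107, which the text rounds to 11%) are assumed values not stated in the text.

```python
# Rough cross-check of the "about 38% of Earth's surface gravity" figure quoted above.
# Surface gravity scales as g ∝ M / R^2.

mass_ratio = 0.107                # Mars mass / Earth mass (assumed; text says ~11%)
radius_ratio = 6779 / 12742       # Mars diameter / Earth diameter ("about half")

gravity_ratio = mass_ratio / radius_ratio**2
print(f"Estimated surface gravity ratio: {gravity_ratio:.2f}")  # ~0.38, i.e. about 38%
```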
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, a perchlorate concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
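A quick arithmetic check of the zero-elevation (areoid) definition quoted above: the 610.5 Pa reference pressure is stated to be about 0.6% of Earth's sea-level surface pressure. Earth's standard sea-level value of 101,325 Pa is assumed here, as it is not given in the text.

```python
# Check that 610.5 Pa (the Martian zero-elevation reference pressure, roughly the
# triple point of water) is about 0.6% of Earth's sea-level pressure.
mars_datum_pressure = 610.5      # Pa, Martian areoid reference pressure (from the text)
earth_sea_level = 101_325.0      # Pa, standard sea-level pressure on Earth (assumed)

fraction = mars_datum_pressure / earth_sea_level
print(f"{fraction:.4f} atm (~{fraction * 100:.1f}% of Earth's sea-level pressure)")
# -> 0.0060 atm, i.e. ~0.6%, matching the figure in the text.
```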
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars possibly a planet with a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The existence of methane could be produced by non-biological process such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to on Earth. Additionally the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. Seasons also produce dry ice covering polar ice caps. Hydrology While Mars contains water in larger amounts, most of it is dust covered water ice at the Martian polar ice caps. 
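Returning briefly to the Atmosphere paragraph above, the roughly 10.8 km scale height can be approximately reproduced with the standard isothermal relation H = RT/(Mg). The sketch below is only a rough check; the mean atmospheric temperature (about 210 K) and the molar mass of CO2 are assumed values not given in the text. The low surface gravity in the denominator is what makes the Martian scale height larger than Earth's despite the colder, heavier CO2-dominated atmosphere, as the text notes.

```python
# Rough reproduction of the ~10.8 km atmospheric scale height quoted above,
# using the isothermal approximation H = R*T / (M*g).
R = 8.314      # J/(mol*K), universal gas constant
T = 210.0      # K, assumed mean temperature of the Martian atmosphere (not from the text)
M = 0.0440     # kg/mol, molar mass of CO2, the dominant constituent
g = 3.71       # m/s^2, Martian surface gravity (about 38% of Earth's)

H = R * T / (M * g)
print(f"Scale height: {H / 1000:.1f} km")  # ~10.7 km, close to the quoted 10.8 km
```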
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
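As a simple check on the deuterium enrichment cited above, the quoted D/H values can be compared directly; the arithmetic below uses only the figures given in the text.

```python
# Deuterium enrichment of the Martian atmosphere relative to Earth, using the
# D/H values quoted above: (9.3 ± 1.7) x 10^-4 for Mars versus 1.56 x 10^-4 for Earth.
mars_dh, mars_dh_err = 9.3e-4, 1.7e-4
earth_dh = 1.56e-4

central = mars_dh / earth_dh
low = (mars_dh - mars_dh_err) / earth_dh
high = (mars_dh + mars_dh_err) / earth_dh
print(f"Enrichment factor: {central:.1f} (range {low:.1f}-{high:.1f})")
# -> ~6.0 (range ~4.9-7.1), consistent with the "five to seven times" stated in the text.
```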
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth (opposition) once every synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Sun and form a straight line crossing it. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet.
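The 779.94-day synodic period given in the Orbital motion paragraph above follows from the sidereal orbital periods of the two planets via 1/T_syn = 1/T_Earth − 1/T_Mars. A minimal check is sketched below; Earth's sidereal year of 365.256 days is an assumed value not stated in the text.

```python
# Synodic period of Mars as seen from Earth: 1/T_syn = 1/T_Earth - 1/T_Mars.
T_earth = 365.256   # days, Earth's sidereal year (assumed)
T_mars = 686.98     # days, Mars's sidereal year (the text rounds this to 687)

T_syn = 1.0 / (1.0 / T_earth - 1.0 / T_mars)
print(f"Synodic period: {T_syn:.1f} days")  # ~779.9 days, matching the quoted 779.94
```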
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις. More commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609 Johannes Kepler published a 10-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997 Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), started an uninterrupted active robotic presence at Mars that has lasted until today. The latter produced complete, extremely detailed maps of the Martian topography, magnetic field, and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from Mars missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of the impact craters on Mars. Likewise, the glass in impact craters on Mars could have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
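The 26-month launch window mentioned above and the Babylonian relation of 37 synodic periods in 79 years quoted earlier in this article both follow from the same piece of arithmetic. The following is a minimal sketch, assuming only the standard orbital periods of Earth (about 365.25 days) and Mars (about 686.98 days); these period values are common reference figures supplied for illustration, not numbers taken from this article.

```python
# Back-of-envelope check of the Earth-Mars synodic period (illustrative sketch;
# the orbital periods below are standard reference values, not from this article).
EARTH_YEAR_DAYS = 365.25   # Earth's orbital period
MARS_YEAR_DAYS = 686.98    # Mars's sidereal orbital period

# Synodic period: time between successive oppositions (and hence launch windows):
# 1/S = 1/P_inner - 1/P_outer
synodic_days = 1.0 / (1.0 / EARTH_YEAR_DAYS - 1.0 / MARS_YEAR_DAYS)
print(f"Synodic period: {synodic_days:.1f} days "
      f"(~{synodic_days / 30.44:.1f} months)")                      # ~779.9 days, ~25.6 months

# The Babylonian relation: 37 synodic periods and 42 circuits of the zodiac in 79 years.
print(f"37 synodic periods: {37 * synodic_days / EARTH_YEAR_DAYS:.1f} years")   # ~79.0
print(f"Mars orbits in 79 years: {79 * EARTH_YEAR_DAYS / MARS_YEAR_DAYS:.1f}")  # ~42.0
```

The result, roughly 780 days between successive windows, is the origin of the "every 26 months" figure.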
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-14] | [TOKENS: 4993] |
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD/CE astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m , while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. 
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. 
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. 
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]"; he was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), and the stars "hanging" from the Belt are known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim were said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth-brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. 
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who can retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned his arm and married his daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki which represents a child's string figure similar to a cat's cradle. Several precolonial Filipinos referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. 
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He sometimes is depicted to have a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and are referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/September_8] | [TOKENS: 62] |
Contents September 8 September 8 is the 251st day of the year (252nd in leap years) in the Gregorian calendar; 114 days remain until the end of the year. Events Births Deaths Holidays and observances References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_computing_hardware] | [TOKENS: 13388] |
Contents History of computing hardware The history of computing hardware spans developments from early devices used for simple calculations to today's complex computers, encompassing advances in both analog and digital technology. The first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device to obtain the result. In later stages, computing devices began representing numbers in continuous forms, such as by distance along a scale, rotation of a shaft, or a specific voltage level. Numbers could also be represented in the form of digits, automatically manipulated by a mechanism. Although this approach generally required more complex mechanisms, it greatly increased the precision of results. The development of transistor technology, followed by the invention of integrated circuit chips, led to revolutionary breakthroughs. Transistor-based computers and, later, integrated circuit-based computers enabled digital systems to gradually replace analog systems, increasing both efficiency and processing power. Metal-oxide-semiconductor (MOS) large-scale integration (LSI) then enabled semiconductor memory and the microprocessor, leading to another key breakthrough, the miniaturized personal computer (PC), in the 1970s. The cost of computers gradually became so low that personal computers by the 1990s, and then mobile computers (smartphones and tablets) in the 2000s, became ubiquitous. Early devices Devices have been used to aid computation for thousands of years, often using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. The Lebombo bone from the mountains between Eswatini and South Africa may be the oldest known mathematical artifact. It dates from 35,000 BCE and consists of 29 distinct notches that were deliberately cut into a baboon's fibula. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[b][c] The use of counting rods is one example. The abacus was used early for arithmetic tasks. What is now called the Roman abacus was used in Babylonia as early as c. 2700–2300 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. Several analog computers were constructed in ancient and medieval times to perform astronomical calculations. These included the astrolabe and Antikythera mechanism from the Hellenistic world (c. 150–100 BC). A Greek bronze combination lock from the Augustan or Hadrianic period operated on a primitive form of mechanical logic: the central bolt was physically blocked from retracting until the notches of two independent rotary dials were correctly aligned. In Roman Egypt, Hero of Alexandria (c. 10–70 AD) made mechanical devices including automata and a programmable cart. The steam-powered automatic flute described by the Book of Ingenious Devices (850) by the Persian-Baghdadi Banū Mūsā brothers may have been the first programmable device. Other early mechanical devices used to perform one or another type of calculations include the planisphere and other mechanical computing devices invented by Al-Biruni (c. 
AD 1000); the equatorium and universal latitude-independent astrolabe by Al-Zarqali (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (1094) during the Song dynasty. The castle clock, a hydropowered mechanical astronomical clock invented by Ismail al-Jazari in 1206, was the first programmable analog computer. Ramon Llull invented the Lullian Circle: a notional machine for calculating answers to philosophical questions (in this case, to do with Christianity) via logical combinatorics. This idea was taken up by Leibniz centuries later, and is thus one of the founding elements in computing and information science. Scottish mathematician and physicist John Napier discovered that the multiplication and division of numbers could be performed by the addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables, Napier needed to perform many tedious multiplications. It was at this point that he designed his 'Napier's bones', an abacus-like device that greatly simplified calculations that involved multiplication and division.[d] Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s, shortly after Napier's work, to allow multiplication and division operations to be carried out significantly faster than was previously possible. Edmund Gunter built a calculating device with a single logarithmic scale at the University of Oxford. His device greatly simplified arithmetic calculations, including multiplication and division. William Oughtred greatly improved this in 1630 with his circular slide rule. He followed this up with the modern slide rule in 1632, essentially a combination of two Gunter rules, held together with the hands. Slide rules were used by generations of engineers and other mathematically involved professional workers, until the invention of the pocket calculator. In 1609, Guidobaldo del Monte made a mechanical multiplier to calculate fractions of a degree. Based on a system of four gears, the rotation of an index on one quadrant corresponds to 60 rotations of another index on an opposite quadrant. Thanks to this machine, errors in the calculation of first, second, third and quarter degrees could be avoided. Guidobaldo was the first to document the use of gears for mechanical calculation. Wilhelm Schickard, a German polymath, designed a calculating machine in 1623 which combined a mechanized form of Napier's rods with the world's first mechanical adding machine built into the base. Because it made use of a single-tooth gear there were circumstances in which its carry mechanism would jam. A fire destroyed at least one of the machines in 1624 and it is believed Schickard was too disheartened to build another. In 1642, while still a teenager, Blaise Pascal started some pioneering work on calculating machines and after three years of effort and 50 prototypes he invented a mechanical calculator. He built twenty of these machines (called Pascal's calculator or Pascaline) in the following ten years. Nine Pascalines have survived, most of which are on display in European museums.[e] A continuing debate exists over whether Schickard or Pascal should be regarded as the "inventor of the mechanical calculator" and the range of issues to be considered is discussed elsewhere. 
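Napier's central observation, described above, is that multiplication and division can be traded for addition and subtraction of logarithms; the slide rule mechanizes the same idea with lengths along a scale. The following is a minimal sketch of the principle only, using a modern math library in place of Napier's tables, with sample numbers chosen arbitrarily for illustration.

```python
import math

# Napier's principle: log(a*b) = log(a) + log(b), so a multiplication can be
# carried out as an addition of logarithms followed by one "table lookup"
# (here math.log and math.exp stand in for the printed tables).
def multiply_via_logs(a: float, b: float) -> float:
    return math.exp(math.log(a) + math.log(b))

def divide_via_logs(a: float, b: float) -> float:
    return math.exp(math.log(a) - math.log(b))

print(multiply_via_logs(123.0, 456.0))   # ~56088.0
print(divide_via_logs(56088.0, 456.0))   # ~123.0
```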
Gottfried Wilhelm von Leibniz invented the stepped reckoner and his famous stepped drum mechanism around 1672. He attempted to create a machine that could be used not only for addition and subtraction but would use a moveable carriage to enable multiplication and division. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used." However, Leibniz did not incorporate a fully successful carry mechanism. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of 1822 and even ENIAC of 1945) were based on the decimal system.[f] Around 1820, Charles Xavier Thomas de Colmar created what would over the rest of the century become the first successful, mass-produced mechanical calculator, the Thomas Arithmometer. It could be used to add and subtract, and with a moveable carriage the operator could also multiply, and divide by a process of long multiplication and long division. It utilised a stepped drum similar in conception to that invented by Leibniz. Mechanical calculators remained in use until the 1970s. In 1804, French weaver Joseph Marie Jacquard developed a loom in which the pattern being woven was controlled by a paper tape constructed from punched cards. The paper tape could be changed without altering the mechanical design of the loom. This was a landmark achievement in programmability. His machine was an improvement over similar weaving looms. Punched cards were preceded by punch bands, as in the machine proposed by Basile Bouchon. These bands would inspire information recording for automatic pianos and more recently numerical control machine tools. In the late 1880s, the American Herman Hollerith invented data storage on punched cards that could then be read by a machine. To process these punched cards, he invented the tabulator and the keypunch machine. His machines used electromechanical relays and counters. Hollerith's method was used in the 1890 United States census. That census was processed two years faster than the prior census had been. Hollerith's company eventually became the core of IBM. By 1920, electromechanical tabulating machines could add, subtract, and print accumulated totals. Machine functions were directed by inserting dozens of wire jumpers into removable control panels. When the United States instituted Social Security in 1935, IBM punched-card systems were used to process records of 26 million workers. Punched cards became ubiquitous in industry and government for accounting and administration. Leslie Comrie's articles on punched-card methods and W. J. Eckert's publication of Punched Card Methods in Scientific Computation in 1940, described punched-card techniques sufficiently advanced to solve some differential equations or perform multiplication and division using floating-point representations, all on punched cards and unit record machines. Such machines were used during World War II for cryptographic statistical processing, as well as a vast number of administrative uses. The Astronomical Computing Bureau of Columbia University performed astronomical calculations representing the state of the art in computing. By the 20th century, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. 
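The moveable carriage on Leibniz's design and on the Thomas Arithmometer turned multiplication into repeated addition combined with decimal shifts, which is what "a process of long multiplication" means in practice. The sketch below illustrates that process in the abstract; it is not a model of any particular machine, and the sample numbers are arbitrary.

```python
# Sketch of multiplication by repeated addition with a "carriage shift":
# each decimal digit of the multiplier costs at most nine additions, and the
# shift supplies the appropriate power of ten (an illustration of the idea only).
def long_multiply(multiplicand: int, multiplier: int) -> int:
    accumulator = 0
    for shift, digit_char in enumerate(reversed(str(multiplier))):
        digit = int(digit_char)
        for _ in range(digit):                      # "turns of the crank"
            accumulator += multiplicand * 10 ** shift
    return accumulator

print(long_multiply(742, 35))   # 25970: 742 added 5 times, then 7420 added 3 times
```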
The word "computer" was a job title assigned to primarily women who used these calculators to perform mathematical calculations. By the 1920s, British scientist Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier–Stokes equations. Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. In 1948, the Curta was introduced by Austrian inventor Curt Herzstark. It was a small, hand-cranked mechanical calculator and as such, a descendant of Gottfried Leibniz's Stepped Reckoner and Thomas' Arithmometer. The world's first all-electronic desktop calculator was the British Bell Punch ANITA, released in 1961. It used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology was superseded in June 1963 by the U.S. manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a 5-inch (13 cm) CRT, and introduced reverse Polish notation (RPN). First proposed general-purpose computing device The Industrial Revolution (late 18th to early 19th century) had a significant impact on the evolution of computing hardware, as the era's rapid advancements in machinery and manufacturing laid the groundwork for mechanized and automated computing. Industrial needs for precise, large-scale calculations—especially in fields such as navigation, engineering, and finance—prompted innovations in both design and function, setting the stage for devices like Charles Babbage's difference engine (1822). This mechanical device was intended to automate the calculation of polynomial functions and represented one of the earliest applications of computational logic. Babbage, often regarded as the "father of the computer," envisioned a fully mechanical system of gears and wheels, powered by steam, capable of handling complex calculations that previously required intensive manual labor. His difference engine, designed to aid navigational calculations, ultimately led him to conceive the analytical engine in 1833. This concept, far more advanced than his difference engine, included an arithmetic logic unit, control flow through conditional branching and loops, and integrated memory. Babbage's plans made his analytical engine the first general-purpose design that could be described as Turing-complete in modern terms. The analytical engine was programmed using punched cards, a method adapted from the Jacquard loom invented by Joseph Marie Jacquard in 1804, which controlled textile patterns with a sequence of punched cards. These cards became foundational in later computing systems as well. Babbage's machine would have featured multiple output devices, including a printer, a curve plotter, and even a bell, demonstrating his ambition for versatile computational applications beyond simple arithmetic. Ada Lovelace expanded on Babbage's vision by conceptualizing algorithms that could be executed by his machine. Her notes on the analytical engine, written in the 1840s, are now recognized as the earliest examples of computer programming. 
Lovelace saw potential in computers to go beyond numerical calculations, predicting that they might one day generate complex musical compositions or perform tasks like language processing. Although Babbage's designs were never fully realized due to technical and financial challenges, they influenced a range of subsequent developments in computing hardware. Notably, in the 1890s, Herman Hollerith adapted the idea of punched cards for automated data processing, which was utilized in the U.S. Census and sped up data tabulation significantly, bridging industrial machinery with data processing. The Industrial Revolution's advancements in mechanical systems demonstrated the potential for machines to conduct complex calculations, influencing engineers like Leonardo Torres Quevedo and Vannevar Bush in the early 20th century. Torres Quevedo designed an electromechanical machine with floating-point arithmetic, while Bush's later work explored electronic digital computing. By the mid-20th century, these innovations paved the way for the first fully electronic computers. Analog computers In the first half of the 20th century, analog computers were considered by many to be the future of computing. These devices used the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved, in contrast to digital computers that represented varying quantities symbolically, as their numerical values change. As an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson, later Lord Kelvin, in 1872. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location and was of great utility to navigation in shallow waters. His device was the foundation for further developments in analog computing. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin. He explored the possible construction of such calculators, but was stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. A notable series of analog calculating machines were developed by Leonardo Torres Quevedo since 1895, including one that was able to compute the roots of arbitrary polynomials of order eight, including the complex ones, with a precision down to thousandths. An important advance in analog computing was the development of the first fire-control systems for long range ship gunlaying. When gunnery ranges increased dramatically in the late 19th century it was no longer a simple matter of calculating the proper aim point, given the flight times of the shells. Various spotters on board the ship would relay distance measures and observations to a central plotting station. There the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for Coriolis effect, weather effects on the air, and other adjustments; the computer would then output a firing solution, which would be fed to the turrets for laying. 
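Babbage's difference engine, described in the previous section, tabulated polynomial functions by the method of finite differences, which reduces the whole job to repeated addition. The following is a minimal sketch of that method for a quadratic; the coefficients are arbitrary illustrative values, and the code is an illustration of the mathematics rather than of Babbage's mechanism.

```python
# Method of finite differences, the principle behind Babbage's difference engine:
# once the starting value and the starting differences are set, every further
# value of the polynomial is produced by additions alone.
def tabulate_quadratic(a, b, c, n):
    """Tabulate p(x) = a*x^2 + b*x + c for x = 0..n-1 using only additions."""
    value = c                      # p(0)
    first_diff = a + b             # p(1) - p(0)
    second_diff = 2 * a            # constant second difference of a quadratic
    table = []
    for _ in range(n):
        table.append(value)
        value += first_diff        # add the first-difference column into the result
        first_diff += second_diff  # add the (constant) second difference into it
    return table

print(tabulate_quadratic(2, 3, 5, 6))        # [5, 10, 19, 32, 49, 70]
print([2*x*x + 3*x + 5 for x in range(6)])   # same values, computed directly
```

The same scheme extends to higher-degree polynomials by carrying more difference columns, which is why the engine needed only adding mechanisms.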
In 1912, British engineer Arthur Pollen developed the first electrically powered mechanical analogue computer (called at the time the Argo Clock). It was used by the Imperial Russian Navy in World War I. The alternative Dreyer Table fire control system was fitted to British capital ships by mid-1916. Mechanical devices were also used to aid the accuracy of aerial bombing. Drift Sight was the first such aid, developed by Harry Wimperis in 1916 for the Royal Naval Air Service; it measured the wind speed from the air, and used that measurement to calculate the wind's effects on the trajectory of the bombs. The system was later improved with the Course Setting Bomb Sight, and reached a climax with World War II bomb sights, Mark XIV bomb sight (RAF Bomber Command) and the Norden (United States Army Air Forces). The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927, which built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built. A fully electronic analog computer was built by Helmut Hölzer in 1942 at Peenemünde Army Research Center. By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications. Advent of the digital computer The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper, On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt. He also introduced the notion of a "universal machine" (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. John von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. The era of modern computing began with a flurry of development before and during World War II. Most digital computers built in this period were electromechanical – electric switches drove mechanical relays to perform the calculation. 
These mechanical components had a low operating speed due to their mechanical nature and were eventually superseded by much faster all-electric components, originally using vacuum tubes and later transistors. The Z2 was one of the earliest examples of an electric operated digital computer built with electromechanical relays and was created by civil engineer Konrad Zuse in 1940 in Germany. It was an improvement on his earlier, mechanical Z1; although it used the same mechanical memory, it replaced the arithmetic and control logic with electrical relay circuits. In the same year, electro-mechanical devices called bombes were built by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The bombe's initial design was created in 1939 at the UK Government Code and Cypher School at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. It was a substantial development from a device that had been designed in 1938 by Polish Cipher Bureau cryptologist Marian Rejewski, and known as the "cryptologic bomb" (Polish: "bomba kryptologiczna"). In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code and data were stored on punched film. It was similar to modern machines in several respects, pioneering numerous advances such as floating-point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. Despite lacking explicit conditional execution, the Z3 was proven to have been a theoretically Turing-complete machine in 1998 by Raúl Rojas. In two 1936 patent applications, Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in 1948 in America in the electromechanical IBM SSEC and in Britain in the fully electronic Manchester Baby. Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents. In 1944, the Harvard Mark I was constructed at IBM's Endicott laboratories. It was a similar general purpose electro-mechanical computer to the Z3, but was not quite Turing-complete. The term digital was first suggested by George Robert Stibitz and refers to where a signal, such as a voltage, is not used to directly represent a value (as it would be in an analog computer), but to encode it. In November 1937, Stibitz, then working at Bell Labs (1930–1941), completed a relay-based calculator he later dubbed the "Model K" (for "kitchen table", on which he had assembled it), which became the first binary adder. 
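Stibitz's "Model K" added binary digits using relays. The sketch below expresses the same one-bit addition, chained into a small ripple-carry adder, in terms of Boolean operations; it illustrates the logic of binary addition rather than the wiring of any historical relay circuit.

```python
# One-bit full adder expressed with Boolean operations (XOR, AND, OR),
# then chained into a ripple-carry adder. An illustration of binary addition
# with logic gates, not a reconstruction of Stibitz's relay design.
def full_adder(a: int, b: int, carry_in: int):
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return s, carry_out

def ripple_add(x: int, y: int, width: int = 4) -> int:
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_add(0b0110, 0b0011))   # 9, i.e. 6 + 3 computed bit by bit
```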
Typically signals have two states – low (usually representing 0) and high (usually representing 1), but sometimes three-valued logic is used, especially in high-density memory. Modern computers generally use binary logic, but many early machines were decimal computers. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal or BCD, bi-quinary, excess-3, and two-out-of-five code. The mathematical basis of digital computing is Boolean algebra, developed by the British mathematician George Boole in his work The Laws of Thought, published in 1854. His Boolean algebra was further refined in the 1860s by William Jevons and Charles Sanders Peirce, and was first presented systematically by Ernst Schröder and A. N. Whitehead. In 1879 Gottlob Frege developed the formal approach to logic and proposes the first logic language for logical equations. In the 1930s and working independently, American electronic engineer Claude Shannon and Soviet logician Victor Shestakov both showed a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are now ubiquitous in digital computers. They showed that electronic relays and switches can realize the expressions of Boolean algebra. This thesis essentially founded practical digital circuit design. In addition Shannon's paper gives a correct circuit diagram for a 4 bit digital binary adder. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand, using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium. Engineer Tommy Flowers joined the telecommunications branch of the General Post Office in 1926. While working at the research station in Dollis Hill in the 1930s, he began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, in 1940 Arthur Dickinson (IBM) invented the first digital electronic computer. This calculating device was fully electronic – control, calculations and output (the first electronic display). John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC) in 1942, the first binary electronic digital calculating device. This design was semi-electronic (electro-mechanical control and electronic calculations), and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. However, its paper card writer/reader was unreliable and the regenerative drum contact system was mechanical. The machine's special-purpose nature and lack of changeable, stored program distinguish it from modern computers. Computers whose logic was primarily built using vacuum tubes are now known as first generation computers. During World War II, British codebreakers at Bletchley Park, 40 miles (64 km) north of London, achieved a number of successes at breaking encrypted enemy military communications. 
The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand. The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, code-named "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Max Newman and his colleagues developed the Heath Robinson, a fixed-function machine to aid in code breaking. Tommy Flowers, a senior engineer at the Post Office Research Station, was recommended to Max Newman by Alan Turing and spent eleven months from early February 1943 designing and building the more flexible Colossus computer (which superseded the Heath Robinson). After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. By the time Germany surrendered in May 1945, there were ten Colossi working at Bletchley Park. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal store for the data. The reading mechanism ran at 5,000 characters per second with the paper tape moving at 40 ft/s (12.2 m/s; 27.3 mph). Colossus Mark 1 contained 1500 thermionic valves (tubes), but Mark 2, with 2400 valves and five processors in parallel, was both 5 times faster and simpler to operate than Mark 1, greatly speeding the decoding process. Mark 2 was designed while Mark 1 was being constructed. Allen Coombs took over leadership of the Colossus Mark 2 project when Tommy Flowers moved on to other projects. The first Mark 2 Colossus became operational on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day. Most of the use of Colossus was in determining the start positions of the Tunny rotors for a message, which was called "wheel setting". Colossus included the first-ever use of shift registers and systolic arrays, enabling five simultaneous tests, each involving up to 100 Boolean calculations. This enabled five different possible start positions to be examined for one transit of the paper tape. As well as wheel setting, some later Colossi included mechanisms intended to help determine pin patterns, known as "wheel breaking". Both models were programmable using switches and plug panels in a way their predecessors had not been. Without the use of these machines, the Allies would have been deprived of the very valuable intelligence that was obtained from reading the vast quantity of enciphered high-level telegraphic messages between the German High Command (OKW) and their army commands throughout occupied Europe. Details of their existence, design, and use were kept secret well into the 1970s.
Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand, to keep secret that the British were capable of cracking Lorenz SZ cyphers (from German rotor stream cipher machines) during the oncoming Cold War. Two of the machines were transferred to the newly formed GCHQ and the others were destroyed. As a result, the machines were not included in many histories of computing.[g] A reconstructed working copy of one of the Colossus machines is now on display at Bletchley Park. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC used similar technology to the Colossi, it was much faster and more flexible and was Turing-complete. Like the Colossi, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was ready to be run, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were women who had been trained as mathematicians. It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (equivalent to about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. One of its major engineering feats was to minimize the effects of tube burnout, which was a common problem in machine reliability at that time. The machine was in almost constant use for the next ten years. Stored-program computer The theoretical basis for the stored-program computer was proposed by Alan Turing in his 1936 paper On Computable Numbers. Whilst Turing was at Princeton University working on his PhD, John von Neumann got to know him and became intrigued by his concept of a universal computing machine. Early computing machines executed the set sequence of steps, known as a 'program', that could be altered by changing electrical connections using switches or a patch panel (or plugboard). However, this process of 'reprogramming' was often difficult and time-consuming, requiring engineers to create flowcharts and physically re-wire the machines. Stored-program computers, by contrast, were designed to store a set of instructions (a program), in memory – typically the same memory as stored data. ENIAC inventors John Mauchly and J. Presper Eckert proposed, in August 1944, the construction of a machine called the Electronic Discrete Variable Automatic Computer (EDVAC) and design work for it commenced at the University of Pennsylvania's Moore School of Electrical Engineering, before the ENIAC was fully operational. The design implemented a number of important architectural and logical improvements conceived during the ENIAC's construction, and a high-speed serial-access memory. However, Eckert and Mauchly left the project and its construction floundered. 
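The stored-program principle described above—instructions held in the same memory as data and interpreted by a fetch-decode-execute cycle—can be sketched in a few lines. The toy machine below, with its invented four-instruction set, corresponds to no historical design; it is only meant to show program and data sharing one store.

# A toy stored-program machine: program and data share a single memory.
# The instruction set (LOAD/ADD/STORE/HALT) is invented for illustration.
memory = [
    ("LOAD", 5),     # 0: acc <- memory[5]
    ("ADD", 6),      # 1: acc <- acc + memory[6]
    ("STORE", 7),    # 2: memory[7] <- acc
    ("HALT", None),  # 3: stop
    None,            # 4: unused
    2,               # 5: data
    40,              # 6: data
    0,               # 7: result
]

acc = 0   # accumulator
pc = 0    # program counter

while True:
    op, addr = memory[pc]   # fetch an instruction from the same memory that holds data
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[7])  # -> 42

Because the program is itself data in memory, it can be replaced, or in principle modified by the running machine, without any rewiring—exactly what the plugboard-programmed machines could not do.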
In 1945, von Neumann visited the Moore School and wrote notes on what he saw, which he sent to the project. The U.S. Army liaison there had them typed and circulated as the First Draft of a Report on the EDVAC. The draft did not mention Eckert and Mauchly and, despite its incomplete nature and questionable lack of attribution of the sources of some of the ideas, the computer architecture it outlined became known as the 'von Neumann architecture'. In 1945, Turing joined the UK National Physical Laboratory and began work on developing an electronic stored-program digital computer. His late-1945 report 'Proposed Electronic Calculator' was the first reasonably detailed specification for such a device. Turing presented a more detailed paper to the National Physical Laboratory (NPL) Executive Committee in March 1946, giving the first substantially complete design of a stored-program computer, a device that was called the Automatic Computing Engine (ACE). Turing considered that the speed and the size of computer memory were crucial elements, so he proposed a high-speed memory of what would today be called 25 KB, accessed at a speed of 1 MHz. The ACE implemented subroutine calls, whereas the EDVAC did not, and the ACE also used Abbreviated Computer Instructions, an early form of programming language. The Manchester Baby (Small Scale Experimental Machine, SSEM) was the world's first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. The machine was not intended to be a practical computer, but was instead designed as a testbed for the Williams tube, the first random-access digital storage device. Invented by Freddie Williams and Tom Kilburn at the University of Manchester in 1946 and 1947, it was a cathode-ray tube that used an effect called secondary emission to temporarily store electronic binary data, and was used successfully in several early computers. Described as small and primitive in a 1998 retrospective, the Baby was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as it had demonstrated the feasibility of its design, a project was initiated at the university to develop the design into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. The Baby had a 32-bit word length and a memory of 32 words. As it was designed to be the simplest possible stored-program computer, the only arithmetic operations implemented in hardware were subtraction and negation; other arithmetic operations were implemented in software. The first of three programs written for the machine found the highest proper divisor of 2^18 (262,144), a calculation that was known to take a long time to run—and so prove the computer's reliability—by testing every integer from 2^18 − 1 downwards, as division was implemented by repeated subtraction of the divisor. The program consisted of 17 instructions and ran for 52 minutes before producing the correct answer of 131,072, after the Baby had performed 3.5 million operations (for an effective CPU speed of 1.1 kIPS). The successive approximations to the answer were displayed as a pattern of dots on the output CRT which mirrored the pattern held on the Williams tube used for storage.
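The logic of that first program is easy to restate in modern terms. The Python below is only a functional paraphrase of the procedure described above, not the Baby's 17-instruction program: it searches downward from n − 1 and tests each candidate divisor by repeated subtraction, since the Baby had no division (or even addition) in hardware. On a modern machine it finishes in well under a second, where the original took 52 minutes.

def highest_proper_divisor(n: int) -> int:
    """Largest proper divisor of n, found by trial from n - 1 downwards,
    testing divisibility by repeated subtraction rather than division."""
    candidate = n - 1
    while candidate >= 1:
        remainder = n
        while remainder >= candidate:   # 'divide' by repeatedly subtracting
            remainder -= candidate
        if remainder == 0:              # candidate divides n exactly
            return candidate
        candidate -= 1
    return 1

print(highest_proper_divisor(2**18))  # -> 131072, as the Baby found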
The SSEM led to the development of the Manchester Mark 1 at the University of Manchester. Work began in August 1948, and the first version was operational by April 1949; a program written to search for Mersenne primes ran error-free for nine hours on the night of 16/17 June 1949. The machine's successful operation was widely reported in the British press, which used the phrase "electronic brain" in describing it to their readers. The computer is especially historically significant because of its pioneering inclusion of index registers, an innovation which made it easier for a program to read sequentially through an array of words in memory. Thirty-four patents resulted from the machine's development, and many of the ideas behind its design were incorporated in subsequent commercial products such as the IBM 701 and 702 as well as the Ferranti Mark 1. The chief designers, Frederic C. Williams and Tom Kilburn, concluded from their experiences with the Mark 1 that computers would be used more in scientific roles than in pure mathematics. In 1951 they started development work on Meg, the Mark 1's successor, which would include a floating-point unit. The other contender for being the first recognizably modern digital stored-program computer was the EDSAC, designed and constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England in 1949. The machine was inspired by John von Neumann's seminal First Draft of a Report on the EDVAC and was one of the first usefully operational electronic digital stored-program computers.[h] EDSAC ran its first programs on 6 May 1949, when it calculated a table of squares and a list of prime numbers. The EDSAC also served as the basis for the first commercially applied computer, the LEO I, used by food manufacturing company J. Lyons & Co. Ltd. EDSAC 1 was finally shut down on 11 July 1958, having been superseded by EDSAC 2 which stayed in use until 1965. The "brain" [computer] may one day come down to our level [of the common people] and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far. — British newspaper The Star in a June 1949 news article about the EDSAC computer, long before the era of personal computers. The EDVAC itself, which Mauchly and Eckert had proposed in August 1944, was finally delivered to the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground in August 1949, but due to a number of problems, the computer only began operation in 1951, and then only on a limited basis. The first commercial electronic computer was the Ferranti Mark 1, built by Ferranti and delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions.
The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves). A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer (Lyons Electronic Office) became operational in April 1951 and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO – the first business application to go live on a stored-program computer.[i] In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than US$1 million each ($12.4 million as of 2025). UNIVAC was the first "mass-produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words). In 1952, Compagnie des Machines Bull released the Gamma 3 computer, which became a large success in Europe, eventually selling more than 1,200 units, and the first computer produced in more than 1,000 units. The Gamma 3 had innovative features for its time, including a dual-mode, software-switchable BCD and binary ALU, as well as a hardwired floating-point library for scientific computing. In its E.T. configuration, the Gamma 3 drum memory could fit about 50,000 instructions for a capacity of 16,384 words (around 100 kB), a large amount for the time. Compared to the UNIVAC, IBM introduced a smaller, more affordable computer in 1954 that proved very popular.[j] The IBM 650 weighed over 900 kg, the attached power supply weighed around 1,350 kg, and both were held in separate cabinets of roughly 1.5 × 0.9 × 1.8 m. The system cost US$500,000 ($5.99 million as of 2025) or could be leased for US$3,500 a month ($40,000 as of 2025). Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture – the instruction format included the address of the next instruction – and software: the Symbolic Optimal Assembly Program, SOAP, assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus, wherever possible, instructions were located in the next row of the drum to be read, and additional wait time for drum rotation was reduced. In 1951, British scientist Maurice Wilkes developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialized computer program in high-speed ROM. Microprogramming allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode). This concept greatly simplified CPU development.
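Wilkes's idea can be sketched in a few lines of Python. This is a toy illustration of microprogramming in general, not of his actual control store or of any real machine: each machine instruction expands into a fixed sequence of micro-operations held in a read-only table, and a small engine steps through them.

# Toy control store: each machine instruction maps to a sequence of
# micro-operations, as if held in a high-speed read-only memory.
CONTROL_STORE = {
    "ADD":  ["fetch_operand", "alu_add", "write_accumulator"],
    "SUB":  ["fetch_operand", "alu_subtract", "write_accumulator"],
    "JUMP": ["load_program_counter"],
}

def execute(instruction: str) -> None:
    """Carry out one machine instruction by stepping through its microprogram."""
    for micro_op in CONTROL_STORE[instruction]:
        print(f"  micro-op: {micro_op}")  # a real CPU would pulse control lines here

for op in ("ADD", "JUMP"):
    print(op)
    execute(op)

Defining or extending the instruction set then amounts to changing entries in the table rather than redesigning hard-wired control logic, which is the sense in which microprogramming simplified CPU development.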
Wilkes first described the concept at the University of Manchester Computer Inaugural Conference in 1951, then published it in expanded form in IEEE Spectrum in 1955.[citation needed] It was widely used in the CPUs and floating-point units of mainframe and other computers; it was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube assemblies were used for each bit of the processor.[k] Magnetic memory Magnetic drum memories were developed for the US Navy during WW II, with the work continuing at Engineering Research Associates (ERA) in 1946 and 1947. ERA, then a part of Univac, included a drum memory in its 1103, announced in February 1953. The first mass-produced computer, the IBM 650, also announced in 1953, had about 8.5 kilobytes of drum memory. Magnetic-core memory was patented in 1949, with its first usage demonstrated for the Whirlwind computer in August 1953. Commercialization followed quickly. Magnetic core was used in peripherals of the IBM 702 delivered in July 1955, and later in the 702 itself. The IBM 704 (1955) and the Ferranti Mercury (1957) used magnetic-core memory. It went on to dominate the field into the 1970s, when it was replaced with semiconductor memory. Magnetic core peaked in volume about 1975 and declined in usage and market share thereafter. As late as 1980, PDP-11/45 machines using magnetic-core main memory and drums for swapping were still in use at many of the original UNIX sites. Early digital computer characteristics Transistor computers The bipolar transistor was invented in 1947. From 1955 onward transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had longer service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards such as the IBM Standard Modular System, each carrying one to four logic gates or flip-flops. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves (vacuum tubes). Initially the only devices available were germanium point-contact transistors, less reliable than the valves they replaced but which consumed far less power. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. The 1955 version used 200 transistors, 1,300 solid-state diodes, and had a power consumption of 150 watts. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The design featured a 64-kilobyte magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK.
By 1953 this team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms. CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables; 76 junction transistors were used for the first stage amplifiers for data read from the drum, since point-contact transistors were too noisy. From August 1956, CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more. Problems with the reliability of early batches of point contact and alloyed junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available. The Manchester University Transistor Computer's design was adopted by the local engineering firm of Metropolitan-Vickers in their Metrovick 950, the first commercial transistor computer anywhere. Six Metrovick 950s were built, the first completed in 1956. They were successfully deployed within various departments of the company and were in use for about five years. A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than ten thousand 1401s between 1960 and 1964. Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The second generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk pack could be easily exchanged with another pack in a few seconds. Even though the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk. Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles: one for the instruction, one for the operand data fetch. During the second generation, remote terminal units (often in the form of teleprinters like the Friden Flexowriter) saw greatly increased use.[l] Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers of separation between remote terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks—the Internet.[m] The early 1960s saw the advent of supercomputing.
The Atlas was a joint development between the University of Manchester, Ferranti, and Plessey, and was first installed at Manchester University and officially commissioned in 1962 as one of the world's first supercomputers – considered to be the most powerful computer in the world at that time. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. It was a second-generation machine, using discrete germanium transistors. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognisable modern operating system". In the US, a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. The CDC 6600 outperformed its predecessor, the IBM 7030 Stretch, by about a factor of 3. With performance of about 1 megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. Integrated circuit computers The "third-generation" of digital electronic computers used integrated circuit (IC) chips as the basis of their logic. The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. The first working integrated circuits were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. Kilby's invention was a hybrid integrated circuit (hybrid IC). It had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was a monolithic integrated circuit (IC) chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's monolithic IC was Fairchild's planar process, which allowed integrated circuits to be laid out using the same principles as those of printed circuits. The planar process was developed by Noyce's colleague Jean Hoerni in early 1959, based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide at Bell Labs in the late 1950s. Third generation (integrated circuit) computers first appeared in the early 1960s in computers developed for government purposes, and then in commercial computers beginning in the mid-1960s. The first silicon IC computer was the Apollo Guidance Computer or AGC. Although not the most powerful computer of its time, the extreme constraints on size, mass, and power of the Apollo spacecraft required the AGC to be much smaller and denser than any prior computer, weighing in at only 70 pounds (32 kg). Each lunar landing mission carried two AGCs, one each in the command and lunar ascent modules. Semiconductor memory The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. In addition to data processing, the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores. 
Semiconductor memory, also known as MOS memory, was cheaper and consumed less power than magnetic-core memory. MOS random-access memory (RAM), in the form of static RAM (SRAM), was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1966, Robert Dennard at the IBM Thomas J. Watson Research Center developed MOS dynamic RAM (DRAM). In 1967, Dawon Kahng and Simon Sze at Bell Labs developed the floating-gate MOSFET, the basis for MOS non-volatile memory such as EPROM, EEPROM and flash memory. Microprocessor computers The "fourth-generation" of digital electronic computers used microprocessors as the basis of their logic. The microprocessor has origins in the MOS integrated circuit (MOS IC) chip. Due to rapid MOSFET scaling, MOS IC chips rapidly increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. The subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor". The earliest multi-chip microprocessors were the Four-Phase Systems AL-1 in 1969 and Garrett AiResearch MP944 in 1970, developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, developed on a single PMOS LSI chip. It was designed and realized by Ted Hoff, Federico Faggin, Masatoshi Shima and Stanley Mazor at Intel, and released in 1971.[n] While the earliest microprocessor ICs literally contained only the processor, i.e. the central processing unit, of a computer, their progressive development naturally led to chips containing most or all of the internal electronic parts of a computer. The Intel 8742, for example, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip. During the 1960s, there was considerable overlap between second and third generation technologies.[o] IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automata were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities. It became possible to simulate analog circuits on minicomputers with the Simulation Program with Integrated Circuit Emphasis, or SPICE (1971), one of the programs for electronic design automation (EDA). The microprocessor led to the development of microcomputers, small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond. While which specific product is considered the first microcomputer system is a matter of debate, one of the earliest is R2E's Micral N (François Gernelle, André Truong), launched in "early 1973" using the Intel 8008.
The first commercially available microcomputer kit was the Intel 8080-based Altair 8800, which was announced in the January 1975 cover article of Popular Electronics. However, the Altair 8800 was an extremely limited system in its initial stages, having only 256 bytes of DRAM in its initial package and no input-output except its toggle switches and LED register display. Despite this, it was initially surprisingly popular, with several hundred sales in the first year, and demand rapidly outstripped supply. Several early third-party vendors such as Cromemco and Processor Technology soon began supplying additional S-100 bus hardware for the Altair 8800. In April 1975, at the Hannover Fair, Olivetti presented the P6060, the world's first complete, pre-assembled personal computer system. The central processing unit consisted of two cards, code named PUCE1 and PUCE2, and unlike most other personal computers was built with TTL components rather than a microprocessor. It had one or two 8" floppy disk drives, a 32-character plasma display, 80-column graphical thermal printer, 48 Kbytes of RAM, and BASIC language. It weighed 40 kg (88 lb). As a complete system, this was a significant step from the Altair, though it never achieved the same success. It was in competition with a similar product by IBM that had an external floppy disk drive. From 1975 to 1977, most microcomputers, such as the MOS Technology KIM-1, the Altair 8800, and some versions of the Apple I, were sold as kits for do-it-yourselfers. Pre-assembled systems did not gain much ground until 1977, with the introduction of the Apple II, the Tandy TRS-80, the first SWTPC computers, and the Commodore PET. Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments. A NeXT Computer and its object-oriented development tools and libraries were used by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web server software, CERN httpd, and also used to write the first web browser, WorldWideWeb. Systems as complicated as computers require very high reliability. ENIAC remained on, in continuous operation from 1947 to 1955, for eight years before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable – installed in pairs, one off-line, tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event. In the 21st century, multi-core CPUs became commercially available. 
Content-addressable memory (CAM) has become inexpensive enough to be used in networking, and is frequently used for on-chip cache memory in modern microprocessors, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the 1980s, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate only draws significant current, except for leakage, during the 'transition' between logic states. CMOS circuits have allowed computing to become a commercial product which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. The thermal design power which is dissipated during operation has become as essential as computing speed of operation. In 2006 servers consumed 1.5% of the total U.S. electricity consumption. The energy consumption of computer data centers was expected to double to 3% of world consumption by 2011. The SoC (system on a chip) has compressed even more of the integrated circuitry into a single chip; SoCs are enabling phones and PCs to converge into single hand-held wireless mobile devices. Quantum computing is an emerging technology in the field of computing. MIT Technology Review reported 10 November 2017 that IBM has created a 50-qubit computer; currently its quantum state lasts 50 microseconds. Google researchers have been able to extend the 50 microsecond time limit, as reported 14 July 2021 in Nature; stability has been extended 100-fold by spreading a single logical qubit over chains of data qubits for quantum error correction. Physical Review X reported a technique for 'single-gate sensing as a viable readout method for spin qubits' (a singlet-triplet spin state in silicon) on 26 November 2018. A Google team has succeeded in operating their RF pulse modulator chip at 3 kelvins, simplifying the cryogenics of their 72-qubit computer, which is set up to operate at 0.3 K; but the readout circuitry and another driver remain to be brought into the cryogenics.[p] See: Quantum supremacy Silicon qubit systems have demonstrated entanglement at non-local distances. Computing hardware and its software have even become a metaphor for the operation of the universe. Epilogue An indication of the rapidity of development of this field can be inferred from the history of the seminal 1947 article by Burks, Goldstine and von Neumann. By the time that anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC, and immediately started implementing their own systems. To this day, the rapid pace of development has continued, worldwide.[q][r] See also Notes References Further reading External links |
======================================== |