[SOURCE: https://en.wikipedia.org/wiki/Near_East] | [TOKENS: 8010] |
Near East

The Near East (Arabic: الشرق الأدنى, Greek: Εγγύς Ανατολή, Hebrew: המזרח הקרוב, Turkish: Yakın Doğu) is a transcontinental region around the Eastern Mediterranean encompassing the historical Fertile Crescent, the Levant, Anatolia, Egypt, the Balkans, Mesopotamia, and coastal areas of the Arabian Peninsula. The term came into use among modern Western geographers in the mid-19th century and was originally applied to the Ottoman Empire, but today it has varying definitions within different academic circles. The term Near East was used in conjunction with the Middle East and the Far East (China and beyond), together known as the "three Easts"; it was a term separate from the Middle East during earlier times and in official British usage. As of 2024, both terms are used interchangeably by politicians and news reporters to refer to the same region. Near East and Middle East are both Eurocentric terms. According to the National Geographic Society, the terms Near East and Middle East denote the same territories and are "generally accepted as comprising the countries of the Arabian Peninsula, Cyprus, Egypt, Iraq, Iran, Israel, Jordan, Lebanon, Palestinian territories, Syria, and Turkey". Afghanistan is also often included. In 1997, the Food and Agriculture Organization (FAO) of the United Nations defined the region similarly, but also included Afghanistan. The part of the region that is in Asia (not including Egypt, the Balkans, and Thrace) is "now commonly referred to as West Asia." In 2012, the FAO redefined the Near East as a subregion of the Middle East: the Near East included Iraq, Israel, Jordan, Lebanon, Palestine, the Syrian Arab Republic, and Turkey, while the Middle East included the Arabian Peninsula, the Caucasus, and Iran.

Eastern question

At the beginning of the nineteenth century, the Ottoman Empire included all of the Balkans, north to the southern edge of the Great Hungarian Plain. But by 1914, the empire had lost all of its European territories except Constantinople and Eastern Thrace to the rise of nationalist Balkan states, a process which saw the independence of the Kingdom of Greece, the Kingdom of Serbia, the Danubian Principalities, and the Kingdom of Bulgaria. Up until 1912, the Ottomans retained a band of territory including Albania, Macedonia and the Adrianople Vilayet, which were lost in the two Balkan Wars of 1912–13. The Ottoman Empire, believed to be about to collapse, was portrayed in the press as the "sick man of Europe". The Balkan states, with the partial exception of Bosnia and Albania, were primarily Christian, as was the majority of Lebanon. Starting in 1894, the Ottomans struck at the Armenians and Assyrians on the explicit grounds that they were non-Muslim peoples and as such were a potential threat to the Muslim empire within which they lived. The Hamidian Massacres, the Adana Massacres and the Massacres of Badr Khan targeting Assyrians and Armenians aroused the indignation of the entire Christian world. In the United States, the then-aging Julia Ward Howe, author of the Battle Hymn of the Republic, leapt into the war of words and joined the Red Cross. Relations of minorities within the Ottoman Empire and the disposition of former Ottoman lands became known as the "Eastern question", as the Ottomans were to the east of Europe. It now became relevant to define the east of the eastern question. In about the middle of the nineteenth century, Near East came into use to describe that part of the east closest to Europe.
The term Far East appeared contemporaneously, meaning Japan, China, Korea, Indonesia and Vietnam. Near East applied to what had been mainly known as the Levant, which was in the jurisdiction of the Ottoman Porte, or government. Europeans could not set foot on most of the shores of the southern and central Mediterranean from the Gulf of Sidra to Albania without permits from the Ottoman Empire. Some regions beyond the Ottoman Porte were included. One was North Africa west of Egypt. It was occupied by piratical kingdoms of the Barbary Coast, de facto independent since the eighteenth century and formerly part of the empire at its apogee. Iran was included because it could not easily be reached except through the Ottoman Empire or neighboring Russia. In the 1890s the term tended to focus on the conflicts in the Balkan states and Armenia. The demise of "the sick man of Europe" left considerable confusion as to what was to be meant by Near East. It is now generally used only in historical contexts, to describe the countries of West Asia from the Mediterranean to (or including) Iran. There is, in short, no universally understood fixed inventory of nations, languages, or historical assets defined to be in it.

Background

The geographical terms Near East and Far East refer to areas of the globe in or contiguous to the former British Empire and the neighboring colonies of the Dutch, Portuguese, Spanish and French. They fit together as a pair based on the opposites of far and near, suggesting that they were innovated together. They appear together in the journals of the mid-19th century. Both terms were used before then with local British and American meanings: the near or far east of a field, village or shire. There was a linguistic predisposition to use such terms. The Romans had used them in near Gaul / far Gaul, near Spain / far Spain and others. Before them, the Greeks had the same habit, which appears in Linear B, the oldest known script of Europe, in references to the near province and the far province of the kingdom of Pylos. Usually these terms were given with reference to a geographic feature, such as a mountain range or a river. Ptolemy's Geography divided Asia on a similar basis. In the north is "Scythia this side of the Himalayas" and "Scythia beyond the Himalayas". To the south is "India on this side of the Ganges" and "India beyond the Ganges". Asia began on the coast of Anatolia ("land of the rising Sun"). Beyond the Ganges and Himalayas (including the Tien Shan) were Serica and Serae (sections of China) and some other identifiable far eastern locations known to the voyagers and geographers but not to the general European public. By the time of John Seller's Atlas Maritima of 1670, "India Beyond the Ganges" had become "the East Indies", including China, Korea, southeast Asia and the islands of the Pacific, in a map that was every bit as distorted as Ptolemy's, despite the lapse of approximately 1,500 years. That "east" in turn was only an English translation of Latin Oriens and Orientalis, "the land of the rising Sun", used since Roman times for "east". The world map of Jodocus Hondius of 1590 labels all of Asia from the Caspian to the Pacific as India Orientalis, shortly to appear in translation as the East Indies. Elizabeth I of England, primarily interested in trade with the east, collaborated with English merchants to form the first trading companies to the far-flung regions, using their own jargon. Their goals were to obtain trading concessions by treaty.
The queen chartered the Company of Merchants of the Levant, shortened to the Levant Company and soon known also as the Turkey Company, in 1581. In 1582, the ship The Great Susan transported the first ambassador, William Harebone, to the Ottoman Porte (government of the Ottoman Empire) at Constantinople. Like Anatolia, Levant also means "land of the rising sun", but where Anatolia only ever meant the projection of land currently occupied by the Republic of Turkey, Levant meant anywhere in the domain ruled by the Ottoman Porte. The East India Company (originally chartered as the "Governor and Company of Merchants of London Trading into the East-Indies") was chartered in 1600 for trade to the East Indies.

It has pleased western historians to write of a decline of the Ottoman Empire as though a stable and uncontested polity of that name once existed. The borders did expand and contract, but they were always dynamic and always in "question" right from the beginning. The Ottoman Empire was created from the lands of the former eastern Roman Empire on the occasion of the latter's violent demise. The last Roman emperor died fighting hand-to-hand in the streets of his capital, Constantinople, overwhelmed by the Ottoman military, in May 1453. The victors inherited his remaining territory in the Balkans. The Hungarian lands under Turkish rule had become part of the Habsburg monarchy by 1688, in the Great Turkish War. The Serbian Revolution, 1804–1833, created modern Serbia. The Greek War of Independence, 1821–1832, created modern Greece, which recovered most of the lands of ancient Greece, but could not gain Constantinople. The Ottoman Porte was continuously under attack from some quarter in its empire, primarily the Balkans. Also, on a number of occasions in the early 19th century, American and British warships had to attack the Barbary pirates to stop their piracy and recover thousands of enslaved Europeans and Americans. In 1853 the Russian Empire, on behalf of the Slavic Balkan states, began to question the very existence of the Ottoman Empire. The result was the Crimean War, 1853–1856, in which the British Empire and the French Empire supported the Ottoman Empire in its struggle against the incursions of the Russian Empire. Eventually, the Ottoman Empire lost control of the Balkan region.

Until about 1855, the terms Near East and Far East did not refer to any particular region. The Far East, a phrase containing a noun, East, qualified by an adjective, far, could be at any location in the "far east" of the speaker's home territory. The Ottoman Empire, for example, was the far East as much as the East Indies. The Crimean War brought a change in vocabulary with the introduction of terms more familiar to the late 19th century. The Russian Empire had entered a more aggressive phase, becoming militarily active against the Ottoman Empire and also against China, with territorial aggrandizement explicitly in mind. Rethinking its policy, the British government decided that the two polities under attack were necessary for the balance of power. It therefore undertook to oppose the Russians in both places, one result being the Crimean War. During that war the administration of the British Empire began promulgating a new vocabulary, giving specific regional meanings: the Near East was the Ottoman Empire, and the Far East the East Indies. The two terms were now compound nouns, often shown hyphenated. In 1855, a reprint of a letter earlier sent to The Times appeared in Littell's Living Age.
Its author, an "official Chinese interpreter of 10 years' active service" and a member of the Oriental Club, Thomas Taylor Meadows, was replying to the suggestion by another interpreter that the British Empire was wasting its resources on a false threat from Russia against China. Toward the end of the letter he said: To support the "sick man" in the Near East is an arduous and costly affair; let England, France and America too, beware how they create a "sick giant" in the Far East, for they may rest assured that, if Turkey is [a] European necessity, China is a world necessity. Much of the colonial administration belonged to this club, which had been formed by the Duke of Wellington. Meadows' terminology must represent usage by that administration. If not the first use of the terms, the letter to the Times was certainly one of the earliest presentations of this vocabulary to the general public. The terms became immediately popular, supplanting "Levant" and "East Indies", which gradually receded to minor usages and then began to change meaning.

Near East remained popular in diplomatic, trade and journalistic circles, but a variation soon developed among the scholars and the men of the cloth and their associates: the Nearer East, reverting to the classical and then more scholarly distinction of nearer and farther. They undoubtedly saw a need to separate the biblical lands from the terrain of the Ottoman Empire. The Christians saw the country as the land of the Old and New Testaments, where Christianity had developed. The scholars in the field of studies that eventually became biblical archaeology attempted to define it on the basis of archaeology. For example, The London Review of 1861 (Telford and Barber, unsigned), in reviewing several works by Rawlinson, Layard and others, defined themselves as making: "... an imperfect conspectus of the arrow-headed writings of the nearer east; writings which cover nearly the whole period of the postdiluvian Old Testament history ..." By arrow-headed writings they meant cuneiform texts. In defense of the Bible as history they said: "The primeval nations, that piled their glorious homes on the Euphrates, the Tigris, and the Nile, are among us again with their archives in their hands; ..." They further defined the nations as "... the countries lying between the Caspian, the Persian Gulf, and the Mediterranean ..." The regions in their inventory were Assyria, Chaldea, Mesopotamia, Persia, Armenia, Egypt, Arabia, Syria, Ancient Israel, Ethiopia, Caucasus, Libya, Anatolia and Abyssinia. Explicitly excluded is India. No mention is made of the Balkans.

The British archaeologist D. G. Hogarth published The Nearer East in 1902, in which he stated his view of the Near East: The Nearer East is a term of current fashion for a region which our grandfathers were content to call simply The East. Its area is generally understood to coincide with those classic lands, historically the most interesting on the surface of the globe, which lie about the eastern basin of the Mediterranean Sea; but few probably could say offhand where should be the limits and why. Hogarth then proceeds to say where and why in some detail, but no more mention is made of the classics. His analysis is geopolitical. His map delineates the Nearer East with regular lines as though surveyed. The region includes Iran and the Balkans, but not the Danube lands, and Egypt, but not the rest of North Africa. Except for the Balkans, the region matches the later Middle East.
It differs from the Ottoman Empire of the times in including Greece and Iran. Hogarth gives no evidence of being familiar with the contemporaneous initial concept of the Middle East.

In the last years of the 19th century, the term Near East acquired considerable disrepute in the eyes of the English-speaking public, as did the Ottoman Empire itself. The cause of the onus was the religiously motivated Hamidian Massacres of Christian Armenians, but it seemed to spill over into the protracted conflicts of the Balkans. For a time, Near East often included the Balkans. Robert Hichens' 1913 book The Near East is subtitled "Dalmatia, Greece and Constantinople". The change is evident in the reports of influential British travelers to the Balkans. In 1894, Sir Henry Norman, 1st Baronet, a journalist, traveled to the Far East, afterwards writing a book called The Peoples and Politics of the Far East, which came out in 1895. By "Far East" he meant Siberia, China, Japan, Korea, Siam and Malaya. As the book was a big success, he was off to the Balkan states with his wife in 1896 to develop detail for a sequel, The People and Politics of the Near East, which Scribners planned to publish in 1897. Mrs. Norman, a writer herself, wrote glowing letters of the home and person of Mme. Zakki, "the wife of a Turkish cabinet minister," who, she said, was a cultivated woman living in a country home full of books. As for the natives of the Balkans, they were "a semi-civilized people". The planned book was never published; however, Norman presented the gist of it, mixed with vituperation against the Ottoman Empire, in an article in Scribner's Magazine in June 1896. The empire had descended from an enlightened civilization ruling over barbarians for their own good to something considerably less. The difference was the Hamidian Massacres, which were being conducted even as the couple traveled the Balkans. According to Norman now, the empire had been established by "the Moslem horde" from Asia, which was stopped by "intrepid Hungary." Furthermore, "Greece shook off the turbaned destroyer of her people" and so on. The Russians were suddenly liberators of oppressed Balkan states. Having portrayed the Armenians as revolutionaries in the name of freedom with the expectation of being rescued by the intervention of Christian Europe, he stated "but her hope was vain." England had "turned her back." Norman concluded his exhortation with "In the Balkans, one learns to hate the Turk." Norman made sure that Gladstone read the article. Prince Nicolas of Montenegro wrote a letter thanking him for his article. Throughout this article, Norman uses "Near East" to mean the countries where "the eastern question" applied; that is, to all of the Balkans. The countries and regions mentioned are Greece, Bulgaria, Serbia, Bosnia-Herzegovina (which was Muslim and needed, in his view, to be suppressed), Macedonia, Montenegro, Albania, Romania. The rest of the Ottoman domain is demoted to just "the East". If Norman was attempting to change British policy, it was perhaps William Miller (1864–1945), journalist and expert on the Near East, who did the most in that direction. In essence, he signed the death warrant, so to speak, of the Age of Empires. The fall of the Ottoman Empire ultimately enmeshed all the others as well.
In Travel and Politics in the Near East (1898), Miller claimed to have made four trips to the Balkans, in 1894, 1896, 1897 and 1898, and to be, in essence, an expert on "the Near East", by which he primarily meant the Balkans. Apart from the fact that he attended Oxford and played rugby, not many biographical details have been promulgated. He was, in effect (whatever his formal associations, if any), a point man of British Near Eastern intelligence. In Miller's view, the Ottoman officials were unfit to rule: The plain fact is that it is as hard for an Ottoman official to be honest as it is for a camel to enter through the eye of a needle. It is not so much the fault of the men as the fault of the system, which is thoroughly bad from top to bottom... Turkish administration is synonymous with corruption, inefficiency, and sloth. These were fighting words coming from a country that had once insisted Europe needed Turkey and had been willing to spill blood over it. For his authority Miller invokes the people, citing the "collective wisdom" of Europe, and introducing a concept that would arise many times in the decades to follow under chilling circumstances: "... no final solution of the difficulty has yet been found." Miller's final pronouncements on the topic could not be ignored by either the British or the Ottoman governments: It remains then to consider whether the Great Powers can solve the Eastern Question ... Foreigners find it extremely difficult to understand the foreign, and especially the Eastern policy of Great Britain, and we cannot wonder at their difficulty, for it seems a mass of contradictions to Englishmen themselves ... At one moment we are bringing about the independence of Greece by sending the Turkish fleet to the bottom of the bay of Navarino. Twenty-seven years later we are spending immense sums and wasting thousands of lives in order to protect the Turks against Russia.

If the British Empire was now going to side with the Russian Empire, the Ottoman Empire had no choice but to cultivate a relationship with the Austro-Hungarian Empire, which was supported by the German Empire. In a few years these alignments became the Triple Entente and the Triple Alliance (already formed in 1882), which were in part a cause of World War I. By its end in 1918 three empires were gone, a fourth was about to fall to revolution, and two more, the British and French, were forced to yield in revolutions started under the aegis of their own ideologies. By 1916, when millions of Europeans were becoming casualties of imperial war in the trenches of eastern and western Europe over "the eastern question", Arnold J. Toynbee, Hegelesque historian of civilization at large, was becoming metaphysical about the Near East. Geography alone was not a sufficient explanation of the terms, he believed. If the Ottoman Empire had been a sick man, then: There has been something pathological about the history of this Near Eastern World. It has had an undue share of political misfortunes, and had lain for centuries in a kind of spiritual paralysis between East and West—belonging to neither, partaking paradoxically of both, and wholly unable to rally itself decidedly to one or the other. Having supposed that it was sick, he kills it off: "The Near East has never been more true to itself than in its lurid dissolution; past and present are fused together in the flare." To Toynbee the Near East was a spiritual being of a "Janus-character", connected to both east and west: The limits of the Near East are not easy to define.
On the north-west, Vienna is the most conspicuous boundary-mark, but one might almost equally well single out Trieste or Lvov or even Prag. Towards the southeast, the boundaries are even more shadowy. It is perhaps best to equate them with the frontiers of the Arabic language, yet the genius of the Near East overrides linguistic barriers, and encroaches on the Arabic-speaking world on the one side as well as on the German-speaking world on the other. Syria is essentially a Near Eastern country, and a physical geographer would undoubtedly carry the Near Eastern frontiers up to the desert belt of the Sahara, Nefud and Kevir.

From the death of the Near East, new nations were able to rise from the ashes, notably the Republic of Turkey. Paradoxically, it now aligned itself with the west rather than with the east. Mustafa Kemal, its founder, a former high-ranking Ottoman officer, was insistent on this social revolution, which, among other changes, liberated women from the strait rules still in effect in most Arabic-speaking countries. The demise of the political Near East now left a gap where it had been, into which stepped the Middle East. The term Middle East as a noun and adjective was common in the 19th century in nearly every context except diplomacy and archaeology. An uncountable number of places appear to have had their middle easts, from gardens to regions, including the United States. The innovation of the term Near East to mean the holdings of the Ottoman Empire as early as the Crimean War had left a geographical gap. The East Indies, or "Far East", derived ultimately from Ptolemy's "India Beyond the Ganges." The Ottoman Empire ended at the eastern border of Iraq. "India This Side of the Ganges" and Iran had been omitted. The archaeologists counted Iran as part of the Near East because Old Persian cuneiform had been found there. This usage did not sit well with the diplomats; India was left in an equivocal state. They needed a regional term.

The use of the term Middle East as a region of international affairs apparently began in British and American diplomatic circles quite independently of each other over concern for the security of the same country: Iran, then known to the west as Persia. In 1900 Thomas Edward Gordon published an article, The Problem of the Middle East, which began: It may be assumed that the most sensitive part of our external policy in the Middle East is the preservation of the independence and integrity of Persia and Afghanistan. Our active interest in Persia began with the present century, and was due to the belief that the invasion of India by a European Power was a probable event. The threat that caused Gordon, diplomat and military officer, to publish the article was resumption of work on a railway from Russia to the Persian Gulf. Gordon, a published author, had not used the term previously, but he was to use it from then on. A second strategic personality from American diplomatic and military circles, Alfred Thayer Mahan, concerned about the naval vulnerability of the trade routes in the Persian Gulf and Indian Ocean, commented in 1902: The middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and, in case of disaster, of security.
The British Navy should have the facility to concentrate in force, if occasion arise, about Aden, India, and the Gulf. Apparently the sailor did not connect with the soldier, as Mahan believed he was innovating the term Middle East. It was, however, already there to be seen. Until the interwar period following the First World War, the terms Near East and Middle East co-existed, but they were not always seen as distinct in the eyes of Western commentators. Bertram Lenox Simpson, a journalist who served for a period as an officer for the Chinese Maritime Customs Service, combined both terms in his 1910 work The Conflict of Colour: The Threatened Upheaval Throughout the World as "the Near and Middle East." According to Simpson, the combined region consisted of "India, Afghanistan, Persia, Arabistan, Asia Minor, and last, but not least, Egypt", explaining that the aforementioned regions were in actuality "politically one region – in spite of the divisions into which it is academically divided." In The Conflict of Colour, Simpson argued that what united these regions was their skin color and the fact that they were all under European colonial rule. The work included a "color chart" of the world, dividing it into a spectrum of 'black', 'brown', 'yellow' and 'white' races. Simpson also modified the Eastern Question (a diplomatic issue concerning the waning of the Ottoman Empire in the 19th century) to the "Problem of the Nearer East", which he rephrased around the issue of the future of European colonialism in the Near East, writing that in regards to "the white man": ... in India, in Central Asia, and in all the regions adjacent to the Near East, he still boldly remains a conqueror in possession of vast stretches of valuable territory; a conqueror who has no intention of lightly surrendering his conquests, and who indeed sees in every attempt to modify the old order of things a most hateful and unjustifiable revolt which must at all costs be repressed. This is so absolutely true that no candid person will be inclined to dispute it. The spirit of the Crusaders may thus be said still to linger in those latitudes which, to give geographical and political cohesion, are here broadly named the Middle and Near East; and, to use a somewhat dangerous but illuminating figure of speech, it may even be maintained that to-day, as of old, the white man and the Cross remain as blindly opposed to the brown man and Islamism, Hinduism and what these creeds postulate, as the most uncompromising bigot could desire. According to Simpson, the reason why the "Problem of the Nearer East" remained so misunderstood in the Western world (compared to diplomatic and political issues in the Near East) was that "there is no good work dealing with these problems as one whole, and much misunderstanding consequently exists."

The term Near and Middle East held the stage for a few years before World War I. It proved to be less acceptable to a colonial point of view that saw the entire region as one. In 1916 Captain T. C. Fowle, 40th Pathans (troops of British India), wrote of a trip he had taken from Karachi to Syria just before the war. The book does not contain a single instance of Near East. Instead, the entire region is considered the Middle East. The formerly Near Eastern sections of his trip are now "Turkish" and not Ottoman. Subsequently, with the disgrace of Near East in diplomatic and military circles, Middle East prevailed.
However, Near East continues in use in some circles at the discretion of the defining agency or academic department. The two terms are no longer generally considered to denote distinct regions, as they did at their original definition. Although racial and colonial definitions of the Middle East are no longer considered ideologically sound, the sentiment of unity persists. For much, but by no means all, of the Middle East, the predominance of Islam lends some unity, as does the accident of geographical continuity. Otherwise there is but little basis except for history and convention to lump together peoples of multiple, often unrelated languages, governments, loyalties and customs.

Current meaning

In the 20th century, subsequent to major warfare and decades of intense political turmoil, terms such as Near East, Far East, and Middle East continued to be used, but evolved in their meaning and scope. This increased confusion, the resolution of which became the study of experts in the new field of political science. The new wave of diplomats often came from those programmes. Archaeology on the international scene, though very much of intellectual interest to major universities, was overshadowed by international relations. The archaeologists' domain became the ancient Near East, which could no longer be relied upon to be the actual Near East. The Ottoman Empire was gone, along with all the other empires of the 19th century, replaced in the region with a number of republics with various affinities, regional and global. The many and varied specialized agencies that were formed to handle specific aspects of complex international relations evolved with the terms. Definitions from the present came to be not in concert with those of the past. Reconciling these terms and their definitions remains difficult due to ongoing territorial disputes and the territorial ambitions of various powers, putting any reconciliation of definitions beyond the scope of diplomatic corps in the classical sense. The ancient Near East is frozen in time. The living Near East is primarily what the agencies each define as a matter of practice, often guided by their political leadership. In most cases, this single term is inadequate to describe the geographical range in practical applications. This has resulted in multiple definitions used differently by each major region, power, or institution.

The United States is the chief remaining nation to assign official responsibilities to a region called the Near East. Within the government, the State Department has been most influential in promulgating the Near Eastern regional system. The countries of the former empires of the 19th century have in general abandoned the term and the subdivision in favor of Middle East, North Africa, and various forms of Asia. In many cases, such as France, no distinct regional substructures have been employed; relations with each country are handled by its own part of the French diplomatic apparatus, although regional terms, including Proche-Orient and Moyen-Orient, can be used in a descriptive sense. Some of the most influential agencies in the United States still use Near East as a working concept. For example, the Bureau of Near Eastern Affairs, a division of the United States Department of State, is perhaps the most influential agency to still use the term Near East. Under the Secretary of State, it implements the official diplomacy of the United States, also called statecraft by Secretary Hillary Clinton. The name of the bureau is traditional and historic.
There is, however, no distinct Middle East bureau; all official Middle Eastern affairs are referred to this bureau. Working closely in conjunction with the definition of the Near East provided by the State Department is the Near East South Asia Center for Strategic Studies (NESA), an educational institution of the United States Department of Defense. It teaches courses and holds seminars and workshops for government officials and military officers who will work or are working within its region. As the name indicates, that region is a combination of State Department regions; however, NESA is careful to identify the State Department region. As its Near East is not different from the State Department's, it does not appear in the table. Its name, however, is not entirely accurate. For example, its region includes Mauritania, a member of the State Department's Africa (Sub-Sahara).

The Washington Institute for Near East Policy (WINEP) is a non-profit organization for research and advice on Middle Eastern policy. It regards its target countries as the Middle East but adopts the convention of calling them the Near East to be in conformance with the practices of the State Department. Its views are independent. WINEP bundles the countries of Northwest Africa together under "North Africa". Details can be found in Policy Focus #65.

The United Nations formulates multiple regional divisions as is convenient for its various operations. But few of them include a Near East, and that poorly defined. UNICEF recognizes the "Middle East and North Africa" region, where the Middle East is bounded by the Red Sea on the west and includes Iran on the east. UNESCO recognizes neither a Near East nor a Middle East, dividing the countries instead among three regions: Arab States, Asia and the Pacific, and Africa. Its division "does not forcibly reflect geography" but "refers to the execution of regional activities." The United Nations Statistics Division defines West Asia to contain the countries included elsewhere in the Middle East. Its total area extends further into Central Asia than that of most agencies.

The Directorate of Intelligence, one of four directorates into which the US Central Intelligence Agency (CIA) is divided, includes the Office of Near Eastern and South Asian Analysis (NESA). Its duties are defined as "support on Middle Eastern and North African countries, as well as on the South Asian nations of India, Pakistan, and Afghanistan." The combined range of countries is in fact the same as the State Department's Near East, but the names do not correspond. The Near East of the NESA is the same as the Middle East defined in the CIA-published on-line resource, The World Factbook. Its list of countries is limited by the Red Sea and comprises the entire eastern coast of the Mediterranean, including Israel, Turkey, the small nations of the Caucasus, Iran and the states of the Arabian Peninsula. The US Agency for International Development (USAID), an independent agency under the Department of State established in place of the Marshall Plan for the purpose of determining and distributing foreign aid, does not use the term Near East. Its definition of Middle East corresponds to that of the State Department, which officially prefers the term Near East. The Foreign and Commonwealth Office of the United Kingdom recognises a Middle East and North Africa region, but not a Near East.
Their original Middle East consumed the Near East as far as the Red Sea, ceded India to the Asia and Oceania region, and went into partnership with North Africa as far as the Atlantic. The Ministry of Foreign Affairs of the Hellenic Republic conducts "bilateral relationships" with the countries of the "Mediterranean – Middle East Region" but has formulated no Near East Region. The Ministry of Foreign Affairs of the Republic of Turkey also does not use the term Near East. Its regions include the Middle East, the Balkans and others.

The ancient Near East is a term of the 20th century intended to stabilize the geographical application of Near East to ancient history. The Near East may acquire varying meanings, but the ancient Near East always has the same meaning: the ancient nations, people and languages of the enhanced Fertile Crescent, a sweep of land from the Nile Valley through Anatolia and southward to the limits of Mesopotamia. Resorting to this verbal device, however, did not protect the ancient Near East from the inroads of the Middle East. For example, a high point in the use of ancient Near East for Biblical scholars was the Ancient Near Eastern Texts Relating to the Old Testament by James Bennett Pritchard, a textbook first published in 1950. The last great book written by Leonard Woolley, British archaeologist, excavator of ancient Ur and associate of T. E. Lawrence and Arthur Evans, was The Art of the Middle East, Including Persia, Mesopotamia and Palestine, published in 1961. Woolley had completed it in 1960, two weeks before his death. The geographical ranges in each case are identical.

Parallel with the growth of specialized agencies for conducting or supporting statecraft in the second half of the 20th century has been the collection of resources for scholarship and research, typically in university settings. Most universities teaching the liberal arts have library and museum collections. These are not new; however, the erection of these into "centres" of national and international interest in the second half of the 20th century has created larger databases not available to the scholars of the past. Many of these focus on the ancient Near East, or Near East in the sense of ancient Near East. One such institution is the Centre for the Study of Ancient Documents (CSAD), founded by and located at Oxford University, Great Britain. Among its many activities CSAD numbers "a long-term project to create a library of digitised images of Greek inscriptions." These it arranges by region. Its Egypt and the Near East region includes, besides Egypt, Cyprus, Persia and Afghanistan, but not Asia Minor (a separate region).

A large percentage of experts on the modern Middle East began their training in university departments named for the Near East. Similarly, the journals associated with these fields of expertise include the words Near East or Near Eastern. The meaning of Near East in these numerous establishments and publications is Middle East. Expertise on the modern Middle East is almost never mixed or confused with studies of the ancient Near East, although often ancient Near East is abbreviated to Near East without any implication of modern times. For example, Near Eastern languages in the ancient sense includes such languages as Sumerian and Akkadian. In the modern sense, it is likely to mean any or all of the Arabic languages.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Grey_alien#cite_note-9] | [TOKENS: 2835] |
Grey alien

Grey aliens, also referred to as Zeta Reticulans, Roswell Greys, or simply Greys, are purported extraterrestrial beings. They are frequently featured in claims of close encounters and alien abduction. Greys are typically described as having small, humanoid bodies; smooth, grey skin; disproportionately large, hairless heads; and large, black, almond-shaped eyes. The 1961 Barney and Betty Hill abduction claim was key to the popularization of Grey aliens. Precursor figures have been described in science fiction, and similar descriptions appeared in later accounts of the 1947 Roswell UFO incident and early accounts of the 1948 Aztec UFO hoax. The Grey alien is cited as an archetypal image of an intelligent non-human creature and of extraterrestrial life in general, as well as an iconic trope of popular culture in the age of space exploration.

Description

Greys are typically depicted as grey-skinned, diminutive humanoid beings that possess reduced forms of, or completely lack, external human body parts such as noses, ears, or sex organs. Their bodies are usually depicted as being elongated, having a small chest, and lacking in muscular definition and visible skeletal structure. Their legs are depicted as shorter than, and jointed differently from, those of humans, with proportionally different limbs. Greys are depicted as having unusually large heads in proportion to their bodies, and as having no hair, no noticeable outer ears or noses, and small orifices for ears, nostrils, and mouths. In drawings, Greys are almost always shown with very large, opaque, black eyes, without eye whites. They are frequently described as shorter than average adult humans. The association between Grey aliens and Zeta Reticuli originated with the interpretation, by a schoolteacher named Marjorie Fish sometime in 1969, of a map drawn by Betty Hill. Betty Hill, under hypnosis, had claimed to have been shown a map that displayed the aliens' home system and nearby stars. Upon learning of this, Fish attempted to create a model from a drawing produced by Hill, eventually determining that the stars marked as the aliens' home were those of Zeta Reticuli, a binary star system.

History

In literature, descriptions of beings similar to Grey aliens predate claims of supposed encounters with them. In 1893, H. G. Wells presented a description of humanity's future appearance in the article "The Man of the Year Million", describing humans as having no mouths, noses, or hair, and with large heads. In 1895, Wells also depicted the Eloi, a successor species to humanity, in similar terms in the novel The Time Machine. Both share many characteristics with future perceptions of Greys. As early as 1917, the occultist Aleister Crowley described a meeting with a "preternatural entity" named Lam that was similar in appearance to a modern Grey. Crowley claimed to have contacted Lam through a process called the "Amalantrah Workings," which he believed allowed humans to contact beings from outer space and across dimensions. Other occultists and ufologists, many of whom have retroactively linked Lam to later Grey encounters, have since described their own visitations from him, with one describing the being as a "cold, computer-like intelligence" utterly beyond human comprehension.

...the creatures did not resemble any race of humans. They were short, shorter than the average Japanese, and their heads were big and bald, with strong, square foreheads, and very small noses and mouths, and weak chins.
What was most extraordinary about them were the eyes—large, dark, gleaming, with a sharp gaze. They wore clothes made of soft grey fabric, and their limbs seemed to be similar to those of humans.

In 1933, the Swedish novelist Gustav Sandgren, using the pen name Gabriel Linde, published a science fiction novel called Den okända faran (The Unknown Danger), in which he describes, as quoted above, a race of extraterrestrials who wore clothes made of soft grey fabric and were short, with big bald heads, and large, dark, gleaming eyes. The novel, aimed at young readers, included illustrations of the imagined aliens. This description would become the template upon which the popular image of grey aliens is based.

The conception remained a niche one until 1965, when newspaper reports of the Betty and Barney Hill abduction made the archetype famous. The alleged abductees, Betty and Barney Hill, claimed that in 1961 humanoid alien beings with greyish skin had abducted them and taken them to a flying saucer. In his 1990 article "Entirely Unpredisposed", Martin Kottmeyer suggested that Barney's memories revealed under hypnosis might have been influenced by an episode of the science-fiction television show The Outer Limits titled "The Bellero Shield", which was broadcast 12 days before Barney's first hypnotic session. The episode featured an extraterrestrial with large eyes, who says, "In all the universes, in all the unities beyond the universes, all who have eyes have eyes that speak." The report from the regression featured a scenario that was in some respects similar to the television show. In part, Kottmeyer wrote: Wraparound eyes are an extreme rarity in science fiction films. I know of only one instance. They appeared on the alien of an episode of an old TV series The Outer Limits entitled "The Bellero Shield." A person familiar with Barney's sketch in "The Interrupted Journey" and the sketch done in collaboration with the artist David Baker will find a "frisson" of "déjà vu" creeping up his spine when seeing this episode. The resemblance is much abetted by an absence of ears, hair, and nose on both aliens. Could it be by chance? Consider this: Barney first described and drew the wraparound eyes during the hypnosis session dated 22 February 1964. "The Bellero Shield" was first broadcast on 10 February 1964. Only twelve days separate the two instances. If the identification is admitted, the commonness of wraparound eyes in the abduction literature falls to cultural forces. — Martin Kottmeyer, Entirely Unpredisposed: The Cultural Background of UFO Reports

Carl Sagan echoed Kottmeyer's suspicions in his 1995 book The Demon-Haunted World: Science as a Candle in the Dark, where Invaders from Mars was cited as another potential inspiration. After the Hills' encounter, Greys would go on to become an integral part of ufology and other extraterrestrial-related folklore. This is particularly true in the case of the United States: according to journalist C. D. B. Bryan, 73% of all reported alien encounters in the United States describe Grey aliens, a significantly higher proportion than in other countries. During the early 1980s, Greys were linked to the alleged crash-landing of a flying saucer in Roswell, New Mexico, in 1947. A number of publications contained statements from individuals who claimed to have seen the U.S. military handling a number of unusually proportioned, bald, child-sized beings.
These individuals claimed, during and after the incident, that the beings had oversized heads and slanted eyes, but scant other distinguishable facial features. In 1987, novelist Whitley Strieber published the book Communion, which, unlike his previous works, was categorized as non-fiction, and in which he describes a number of close encounters he alleges to have experienced with Greys and other extraterrestrial beings. The book became a New York Times bestseller, and New Line Cinema released a 1989 film adaptation that starred Christopher Walken as Strieber. In 1988, Christophe Dechavanne interviewed the French science-fiction writer and ufologist Jimmy Guieu on TF1's Ciel, mon mardi !. Besides mentioning Majestic 12, Guieu described the existence of what he called "the little greys", which later became better known in French as les Petits-Gris. Guieu went on to write two docudramas, using as a plot the Grey aliens / Majestic-12 conspiracy theory as described by John Lear and Milton William Cooper, in the series "E.B.E." (for "Extraterrestrial Biological Entity"): E.B.E.: Alerte rouge (first part, 1990) and E.B.E.: L'entité noire d'Andamooka (second part, 1991).

Greys have since become the subject of many conspiracy theories. Many conspiracy theorists believe that Greys represent part of a government-led disinformation or plausible deniability campaign, or that they are a product of government mind-control experiments. During the 1990s, popular culture also began to increasingly link Greys to a number of military-industrial complex and New World Order conspiracy theories. In 1995, filmmaker Ray Santilli claimed to have obtained 22 reels of 16 mm film that depicted the autopsy of a "real" Grey supposedly recovered from the site of the 1947 incident in Roswell. In 2006, though, Santilli announced that the film was not original, but was instead a "reconstruction" created after the original film was found to have degraded. He maintained that a real Grey had been found and autopsied on camera in 1947, and that the footage released to the public contained a percentage of that original footage.

Analysis

Greys are often involved in alien abduction claims. Among reports of alien encounters, Greys make up about 50% in Australia, 73% in the United States, 48% in continental Europe, and around 12% in the United Kingdom. These reports include two distinct groups of Greys that differ in height. Abduction claims are often described as extremely traumatic, similar to an abduction by humans or even a sexual assault in the level of trauma and distress. The emotional impact of perceived abductions can be as great as that of combat, sexual abuse, and other traumatic events. The eyes are often a focus of abduction claims, which often describe a Grey staring into the eyes of an abductee when conducting mental procedures. This staring is claimed to induce hallucinogenic states or directly provoke different emotions. Neurologist Steven Novella proposes that Grey aliens are a byproduct of the human imagination, with the Greys' most distinctive features representing everything that modern humans traditionally link with intelligence: "The aliens, however, do not just appear as humans, they appear like humans with those traits we psychologically associate with intelligence." In 2005, Frederick V. Malmstrom, writing in Skeptic magazine (Volume 11, issue 4), presented his idea that Greys are actually residual memories of early childhood development.
Malmstrom reconstructs the face of a Grey through transformation of a mother's face based on our best understanding of early-childhood sensation and perception. Malmstrom's study offers an alternative explanation for the existence of Greys, for the intense instinctive response many people experience when presented with an image of a Grey, and for the role of regression hypnosis and recovered-memory therapy in "recovering" memories of alien abduction experiences, along with their common themes. According to biologist Jack Cohen, the typical image of a Grey, assuming that it would have evolved on a world with environmental and ecological conditions different from Earth's, is too physiologically similar to a human to be credible as a representation of an alien. The interdimensional hypothesis, the cryptoterrestrial hypothesis, and the time-traveller hypothesis attempt to provide alternative explanations for the humanoid anatomy and behavior of these alleged beings.

In popular culture

Depictions of Grey aliens have gone on to appear in a number of films and television shows, supplanting the previously popular little green men. As early as 1966, for example, the superhero character Ultraman was explicitly based on them, and in 1977 they were featured in Close Encounters of the Third Kind. Greys have also been worked into space opera and other interstellar settings: in Babylon 5, the Greys are referred to as the "Vree", and are depicted as being allies and trade partners of 23rd-century Earth, while in the Stargate franchise they are called the "Asgard" and depicted as ancient astronauts allied with modern-day Earth. South Park refers to them as "visitors". During the 1990s, plotlines wherein Greys were linked to conspiracy theories became common. A well-known example is the Fox television series The X-Files, which first aired in 1993. It combined the quest to find proof of the existence of Grey-like extraterrestrials with a number of UFO conspiracy theory subplots to form its primary story arc. Other notable examples include the XCOM video game franchise (where they are called "Sectoids"); Dark Skies, first broadcast in 1996, which expanded upon the MJ-12 conspiracy; and American Dad!, which features a Grey-like alien named Roger, whose backstory draws from both the Roswell incident and Area 51 conspiracy theories. The 2011 film Paul tells the story of a Grey named Paul who attributes the Greys' frequent presence in science fiction pop culture to the US government deliberately inserting the stereotypical Grey alien image into mainstream media, so that if humanity came into contact with Paul's species, no immediate shock would occur as to their appearance. Child abduction by Greys is a key plot point in the 2013 film Dark Skies. Greys appear in Syfy's 2021 science fiction dramedy series Resident Alien. The Greys appear as the main antagonistic faction in the 2023 independent game Greyhill Incident.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Vespasian] | [TOKENS: 5763] |
Vespasian

Vespasian (/vɛsˈpeɪʒ(i)ən, -ziən/; Latin: Vespasianus [wɛspasjˈaːnus]; 17 November AD 9 – 23 June 79) was Roman emperor from 69 to 79. The last emperor to reign in the Year of the Four Emperors, he founded the Flavian dynasty, which ruled the empire for 27 years. His fiscal reforms and consolidation of the empire brought political stability and an extensive building program. Vespasian was the first emperor from an equestrian family, rising into the senatorial rank only later in his lifetime, as the first of his family to do so. He rose to prominence through military achievement: he served as legate of Legio II Augusta during the Roman invasion of Britain in 43, and later led the suppression of the Jewish rebellion of 66–70. While he was engaged in the campaign in Judaea, Emperor Nero died by suicide in June 68, plunging Rome into a year of civil war known as the Year of the Four Emperors. After two emperors, Galba and Otho, died in quick succession, Vitellius became emperor in April 69. The Roman legions of Egypt and Judaea reacted by declaring Vespasian, their commander, emperor on 1 July 69. Vespasian joined forces with Mucianus, the governor of Syria, and Primus, a general in Pannonia. They led the Flavian forces against Vitellius, while Vespasian took control of Egypt. On 20 December 69, Vitellius was defeated, and the following day Vespasian was declared emperor by the Senate. Little information survives about the government during Vespasian's ten-year rule. He reformed the financial system of the Roman Empire after the campaign against Judaea ended successfully, and initiated several ambitious construction projects, including the building of the Flavian Amphitheatre, better known today as the Colosseum. Through his general Agricola, Vespasian increased imperial expansion in Britain. Vespasian is often credited with restoring political stability to Rome following the chaotic reigns of his predecessors. After he died in 79, he was succeeded by his eldest son Titus, thus becoming the first Roman emperor to be succeeded by his natural son and establishing the Flavian dynasty.

Early life

Vespasian (born Titus Flavius Vespasianus, pronounced [ˈt̪ɪt̪ʊs ˈfɫaːwijʊs wɛs.pasiˈjaːnʊs]) was born in a village north-east of Rome called Falacrinae. His family was relatively undistinguished and lacking in pedigree. Vespasian was the son of Titus Flavius Sabinus, a Roman moneylender, debt collector, and tax collector. His mother, Vespasia Polla, also belonged to the equestrian order in society; her father rose to the rank of prefect of the camp, and her brother became a senator. Vespasian was educated in the countryside at Cosa, near what is today Ansedonia, Italy, under the guidance of his paternal grandmother. He became so attached to the places of his childhood that even when he became emperor he often returned to them, keeping the villa exactly as it had been. Early in his life he was somewhat overshadowed by his older brother, Titus Flavius Sabinus, who had entered public life and pursued the cursus honorum, holding an important military command on the Danube.

Military and political career

In preparation for a praetorship, Vespasian needed two periods of service in the minor magistracies, one military and the other public. Vespasian served in the military in Thracia for about three years. On his return to Rome in about 30 AD, he obtained a post in the vigintivirate, the minor magistracies, most probably in one of the posts in charge of street cleaning.
His early performance was so unsuccessful that Emperor Caligula reportedly stuffed handfuls of muck down his toga as punishment for the uncleaned Roman streets, formally his responsibility. During the period of the ascendancy of Sejanus, there is no record of Vespasian engaging in any significant political activity. After completion of a term in the vigintivirate, Vespasian was entitled to stand for election as quaestor, a senatorial office. However, his lack of political or family influence meant that Vespasian served as quaestor in one of the provincial posts in Crete, rather than as assistant to important men in Rome. Next he needed to gain a praetorship, an office carrying imperium, but non-patricians and the less well-connected had to serve in at least one intermediary post as an aedile or tribune. Vespasian failed at his first attempt to gain an aedileship but was successful in his second attempt, becoming an aedile in 38. Despite his lack of significant family connections or success in office, he achieved the praetorship in either 39 or 40, at the youngest age permitted (30), during a period of political upheaval in the organisation of elections. His long-standing relationship with the freedwoman Antonia Caenis, confidential secretary to Antonia Minor (the Emperor's grandmother) and part of the circle of courtiers and servants around the Emperor, may have contributed to his success.

Upon the accession of Claudius as emperor in 41, Vespasian was appointed legate of Legio II Augusta, stationed in Germania, thanks to the influence of the Imperial freedman Narcissus. In 43, Vespasian and the II Augusta participated in the Roman invasion of Britain, and he distinguished himself under the overall command of Aulus Plautius. After participating in crucial early battles on the rivers Medway and Thames, he was sent to reduce the south west, penetrating through regions later known as the counties of Hampshire, Wiltshire, Dorset, Somerset, Devon and Cornwall, with the probable objectives of securing the south coast ports and harbours along with the tin mines of Cornwall and the silver and lead mines of Somerset. Vespasian marched from Noviomagus Reginorum (Chichester) to subdue the hostile Durotriges and Dumnonii tribes, and captured twenty oppida (towns, or more probably hill forts, including Hod Hill and Maiden Castle in Dorset). He also invaded Vectis (now the Isle of Wight), finally setting up a fortress and legionary headquarters at Isca Dumnoniorum (Exeter). During this time he injured himself and did not fully recover until he went to Egypt. These successes earned him triumphal regalia (ornamenta triumphalia) on his return to Rome.

His success as the legate of a legion earned him a consulship in 51, after which he retired from public life, having incurred the enmity of Claudius' wife, Agrippina, who was the most powerful and influential figure in her husband's reign. He came out of retirement in 63 when he was sent as governor to Africa Province. According to Tacitus (ii.97), his rule was "infamous and odious", but according to Suetonius (Vesp. 4), he was "upright and highly honourable". On one occasion, Suetonius writes, Vespasian was pelted with turnips. Vespasian used his time in North Africa wisely. Usually, governorships were seen by ex-consuls as opportunities to extort huge amounts of money to regain the wealth they had spent on their previous political campaigns. Corruption was so rife that it was almost expected that a governor would come back from these appointments with his pockets full.
However, Vespasian used his time in North Africa making friends instead of money, something that would prove far more valuable in the years to come. During his time in North Africa, he found himself in financial difficulties and was forced to mortgage his estates to his brother. To revive his fortunes he turned to the mule trade and gained the nickname mulio (muleteer). Returning from Africa, Vespasian toured Greece in Nero's retinue, but lost Imperial favor after paying insufficient attention (some sources suggest he fell asleep) during one of the Emperor's recitals on the lyre, and found himself in the political wilderness.

In AD 66, Vespasian was appointed to suppress the Jewish revolt underway in Judea. The fighting there had killed the previous governor and routed Cestius Gallus, the governor of Syria, when he tried to restore order. Two legions, with eight cavalry squadrons and ten auxiliary cohorts, were therefore dispatched under the command of Vespasian, while his elder son, Titus, arrived from Alexandria with a third. During this time he became the patron of Flavius Josephus, a Jewish resistance leader captured at the Siege of Yodfat, who would later write his people's history in Greek. Ultimately, thousands of Jews were killed and the Romans destroyed many towns in re-establishing control over Judea; they also took Jerusalem in 70. Vespasian is remembered by Josephus (writing as a Roman citizen), in his Antiquities of the Jews, as a fair and humane official, in contrast with the notorious Herod Agrippa II, whom Josephus goes to great lengths to demonize. While under the emperor's patronage, Josephus wrote that after the Roman Legio X Fretensis, accompanied by Vespasian, destroyed Jericho on 21 June 68, Vespasian took a group of Jews who could not swim (possibly Essenes from Qumran), fettered them, and threw them into the Dead Sea to test the sea's legendary buoyancy. Indeed, the captives bobbed up to the surface after being thrown into the water from the boats. At the conclusion of the Jewish war, Josephus discussed a prophecy from sacred scripture that about the time when Jerusalem and the Second Temple would be taken, a man from their own nation would become "governor of the habitable earth", that is, the Messiah. Josephus interpreted the prophecy as denoting the government of Vespasian. Tacitus agreed that the prophecy referred to Vespasian (as well as Titus), but wrote that "the common people, with the usual blindness of ambition, had interpreted these mighty destinies of themselves, and could not be brought even by disasters to believe the truth."

Year of the Four Emperors (69)

After the death of Nero in 68, Rome saw a succession of short-lived emperors and a year of civil wars. Galba was murdered by supporters of Otho, who was in turn defeated by Vitellius. Otho's supporters, looking for another candidate, settled on Vespasian. According to Suetonius, a prophecy ubiquitous in the Eastern provinces claimed that from Judaea would come the future rulers of the world. Vespasian eventually came to believe that this prophecy applied to him, and found a number of omens and oracles that reinforced this belief. Although Vespasian and Titus resolved to challenge for the Principate in February 69, they made no move until later in the year. Throughout the early months of 69, Vespasian convened frequently with the Eastern generals. Gaius Licinius Mucianus was a notable ally.
Governor of Syria and commander of three legions, Mucianus also held political connections to many of the most powerful Roman military commanders from Illyricum to Britannia by virtue of his service to the famous Neronian general Gnaeus Domitius Corbulo. In May 69, Mucianus formally implored Vespasian to challenge Vitellius. His appeal was followed by Vespasian's official proclamation as emperor in early July. Under instructions from the prefect Tiberius Alexander, the legions at Alexandria took an oath of loyalty to Vespasian on 1 July. They were swiftly followed by Vespasian's Judaean legions on 3 July and thereafter by Mucianus' Syrian legions on 15 July. Vitellius, the occupant of the throne, had the veteran legions of Gaul and the Rhineland. But the feeling in Vespasian's favour quickly gathered strength, and the armies of Moesia, Pannonia, and Illyricum soon declared for him. The praefectus Aegypti, who had been governor since Nero's reign, proclaimed Vespasian emperor at Alexandria on 1 July AD 69. While Vespasian himself was in Egypt, his troops entered Italy from the northeast under the leadership of Marcus Antonius Primus. They defeated Vitellius' army (which had awaited him in Mevania) at Bedriacum (or Betriacum), sacked Cremona and advanced on Rome. Vitellius hastily arranged a peace with Antonius, but the Emperor's Praetorian Guard forced him to retain his seat. After furious fighting, Antonius' army entered Rome. In the resulting confusion, the Capitol was destroyed by fire and both Vitellius and Vespasian's brother Sabinus were killed. At Alexandria, Vespasian immediately sent supplies of urgently needed grain to Rome, along with an edict assuring that he would reverse the laws of Nero, especially those relating to treason. He was the first emperor since Augustus to appear in Egypt. While there, he visited the Temple of Serapis, where he reportedly experienced a vision, and he performed healing miracles. He was hailed as pharaoh and proclaimed the son of the creator-deity Amun (Zeus-Ammon) in the style of the ancient pharaohs, and an incarnation of Serapis in the manner of the Ptolemies.

Emperor (69–79)

Vespasian was declared emperor by the Senate while he was in Egypt on 21 December 69, through the passage of the Lex de imperio Vespasiani; the Egyptians had declared him emperor in the summer. In the short term, administration of the empire was given to Mucianus, who was aided by Vespasian's son Domitian. Mucianus started off Vespasian's rule with a tax reform that was to restore the empire's finances. After Vespasian arrived in Rome in mid-70, Mucianus continued to press him to collect as many taxes as possible. Vespasian and Mucianus renewed old taxes and instituted new ones, increased the tribute of the provinces, and kept a watchful eye upon the treasury officials. Before Vespasian, Emperor Nero had introduced a urine tax on public toilets under the name of vectigal urinae in the 1st century AD (see Pay toilet). The tax was removed after a while, and it was Vespasian's reimposition of it around AD 70 that is remembered to this day, possibly giving origin to the Latin proverb Pecunia non olet ("Money does not stink"). Writing about Vespasian in their histories, Dio Cassius and Suetonius relate: "When [Vespasian's] son Titus blamed him for even laying a tax upon urine, he applied to his nose a piece of the money he received in the first instalment, and asked him if it stunk.
And he replying no, 'And yet,' said he, 'it is derived from urine.'" Since then, the phrase "money does not stink" has been used to excuse money of dubious or illegal origin.

In early 70 Vespasian was still in Egypt, the source of Rome's grain supply, and had not yet left for Rome. According to Tacitus, his trip was delayed due to bad weather. Modern historians theorize that Vespasian was consolidating support from the Egyptians before departing. During this period, protests erupted in Alexandria over his new tax policies and grain shipments were held up. Vespasian eventually restored order, and grain shipments to Rome resumed. Notably, Titus attended the consecration of a new Apis bull at Memphis in 70, and Vespasian's reign saw imperial patronage given to Egyptian temples: at the Dakhla Oasis in the Western Desert, as well as at Esna, Kom Ombo, Medinet Habu and Silsila in the Nile Valley. In addition to the uprising in Egypt, unrest and civil war continued in the rest of the empire in 70. Judea had been in rebellion since 66. Vespasian's son Titus finally subdued the rebellion with the capture of Jerusalem and the destruction of the Jewish Temple in 70. According to Eusebius, Vespasian then ordered all descendants of the royal line of David to be hunted down, causing the Jews to be persecuted from province to province. Several modern historians have suggested that Vespasian, having already been told by Josephus that he was prophesied to become emperor while in Judaea, was probably reacting to other widely known Messianic prophecies circulating at the time, to suppress any rival claimants arising from that dynasty. The Jewish temple at Leontopolis was sacked in 73. In January 70, an uprising occurred in Gaul and Germany, known as the second Batavian Rebellion. This rebellion was headed by Gaius Julius Civilis and Julius Sabinus. Sabinus, claiming he was descended from Julius Caesar, declared himself Emperor of Gaul. The rebellion defeated and absorbed two Roman legions before it was suppressed by Vespasian's son-in-law, Quintus Petillius Cerialis, by the end of 70. In mid-70, Vespasian arrived in Rome, dating his tribunician years from 1 July 69. He immediately embarked on a series of efforts to stay in power and prevent future revolts. He offered gifts to many in the military and much of the public. Soldiers loyal to Vitellius were dismissed or punished. Vespasian also restructured the senatorial and equestrian orders, removing his enemies and adding his allies. The regional autonomy of the Greek provinces was repealed. We know from Suetonius that the "unexpected and still quite new emperor was lacking auctoritas [English: backing, support] and a certain maiestas [English: majesty]". Many modern historians note the increased amount of propaganda that appeared during Vespasian's reign. A component of this propaganda was the theology of victory, which legitimized the right to rule through successful conquest; it revolved around Vespasian's victory in Judea. Stories of a supernatural emperor who was destined to rule circulated in the empire. Nearly one-third of all coins minted in Rome under Vespasian celebrated military victory or peace. The word vindex was removed from coins so as not to remind the public of the rebellious Vindex. Construction projects bore inscriptions praising Vespasian and condemning previous emperors. A temple of peace was constructed in the forum as well. Little is known about much of Vespasian's reign between 71 and 79.
Historians report that Vespasian ordered the construction of several buildings in Rome, and that he survived several conspiracies against him. Vespasian helped rebuild Rome after the civil war. He added the temple of Peace and the temple to the Deified Claudius. In 75, he erected a colossal statue of Apollo, begun under Nero, and he dedicated a stage of the theatre of Marcellus. He also began construction of the Colosseum, using funds from the spoils of the Jewish Temple after the Siege of Jerusalem. Suetonius claims that Vespasian was met with "constant conspiracies" against him, though only one conspiracy is known specifically: in 78 or 79, Eprius Marcellus and Aulus Caecina Alienus attempted to kill Vespasian. Why these men turned against him is not known. Agricola was appointed to the command of the Legio XX Valeria Victrix, stationed in Britain, in place of Marcus Roscius Coelius, who had stirred up a mutiny against the governor, Marcus Vettius Bolanus. Britain had revolted during the year of civil war, and Bolanus was a mild governor. Agricola reimposed discipline on the legion and helped to consolidate Roman rule. In 71, Bolanus was replaced by a more aggressive governor, Quintus Petillius Cerialis, and Agricola was able to display his talents as a commander in campaigns against the Brigantes in northern England. In his ninth consulship Vespasian had a slight illness in Campania and returned at once to Rome; he then left for Aquae Cutiliae and the country around Reate, where he spent every summer. His illness worsened, however, and he developed severe diarrhea. Sensing that death was near, he is reported to have quipped: "Vae, puto deus fio." ("Dear me, I think I'm becoming a god.") Then, according to Suetonius' The Twelve Caesars: Taken on a sudden with such an attack of diarrhoea that he all but swooned, he said: "An emperor ought to die standing," and while he was struggling to get on his feet, he died in the arms of those who tried to help him, on the ninth day before the Kalends of July [June 23], at the age of sixty-nine years, seven months and seven days. — Suetonius, Lives of the Twelve Caesars, "Life of Vespasian" §24 He died on 23 June AD 79 and was succeeded by his sons, first Titus and then Domitian.

Legacy

Vespasian was known for his wit and his amiable manner alongside his commanding personality and military prowess. He could be liberal to impoverished senators and equestrians and to cities and towns desolated by natural calamity. He was especially generous to men of letters and rhetors, several of whom he pensioned with salaries of as much as 1,000 gold pieces a year. Quintilian is said to have been the first public teacher to enjoy this imperial favor. Pliny the Elder's Natural History was written during Vespasian's reign and dedicated to Vespasian's son Titus. Vespasian distrusted philosophers in general. It was the talk of philosophers, who liked to glorify the Republic, that provoked Vespasian into reviving the obsolete penal laws against this profession as a precautionary measure. He was also noted for his benefactions to the people. Much money was spent on public works and the restoration and beautification of Rome: the Temple of Peace (also known as the Forum of Vespasian), new public baths and the great showpiece, the Colosseum. Vespasian slightly debased the denarius during his reign, reducing the silver purity from 93.5% to 90%; the silver weight dropped from 2.97 grams to 2.87 grams.
In modern Romance languages, urinals are named after him (for example, vespasiano in Italian and vespasienne in French), probably in reference to a tax he placed on urine collection. Vespasian approved histories written under his reign, ensuring that biases against him were removed. He also gave financial rewards to writers. The ancient historians who lived through the period, such as Tacitus, Suetonius and Josephus, speak suspiciously well of Vespasian while condemning the emperors who came before him. Tacitus admits that his status was elevated by Vespasian, and Josephus identifies Vespasian as a patron and saviour. Pliny the Elder, meanwhile, dedicated his Natural History to Vespasian's son Titus. Those who spoke against Vespasian were punished. A number of Stoic philosophers were accused of corrupting students with inappropriate teachings and were expelled from Rome. Helvidius Priscus, a pro-Republic philosopher, was executed for his teachings. Numerous other philosophers and writers had their works seized, destroyed and denounced for being deemed too critical of Vespasian's reign, some even posthumously. According to Suetonius' version of events, however, Vespasian "bore the frank language of his friends, the quips of pleaders, and the impudence of the philosophers with the greatest patience", and only Helvidius Priscus was put to death, after he repeatedly affronted the Emperor with studied insults that Vespasian initially tried to ignore. The philosopher Demetrius, for example, was banished to an island, and when Vespasian heard that Demetrius was still criticizing him, he sent the exiled philosopher the message: "You are doing everything to force me to kill you, but I do not slay a barking dog."

Family and personal life

His paternal grandfather, Titus Flavius Petro, was the first member of the family to distinguish himself, rising to the rank of centurion and fighting at Pharsalus for Pompey in 48 BC. Subsequently, he became a debt collector. Petro's son, Titus Flavius Sabinus, worked as a customs official in the province of Asia and became a moneylender on a small scale among the Helvetii. He earned a reputation as a scrupulous and honest "tax-farmer". Sabinus married up in status, to Vespasia Polla, whose father had risen to the rank of prefect of the camp and whose brother became a senator. Sabinus and Vespasia had three children, the eldest of whom, a girl, died in infancy. The elder boy, Titus Flavius Sabinus, entered public life and pursued the cursus honorum. Vespasian, on the other hand, seemed far less likely to be successful, initially not wishing to pursue high public office. He followed in his brother's footsteps only when driven to it by his mother's taunting. During this period he married Flavia Domitilla, the daughter of Flavius Liberalis from Ferentium and formerly the mistress of Statilius Capella, a Roman equestrian from Sabratha in Africa. They had two sons, Titus Flavius Vespasianus (born 39) and Titus Flavius Domitianus (born 51), and a daughter, Domitilla (born c. 45). His wife Domitilla and his daughter Domitilla both died before Vespasian became emperor in 69. After the death of his wife, Vespasian's long-standing mistress, Antonia Caenis, became his wife in all but formal status, a relationship that continued until she died in 75.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Chainer] | [TOKENS: 641] |
Chainer

Chainer is an open source deep learning framework written purely in Python on top of the NumPy and CuPy Python libraries. Its development was led by the Japanese venture company Preferred Networks in partnership with IBM, Intel, Microsoft, and Nvidia. Chainer is notable for its early adoption of the "define-by-run" scheme, as well as for its performance on large-scale systems. The first version was released in June 2015, and the framework gained considerable popularity in Japan. In 2017, it was listed by KDnuggets among the top 10 open source machine learning Python projects. In December 2019, Preferred Networks announced the transition of its development effort from Chainer to PyTorch, stating that it would only provide maintenance patches after releasing v7.

Define-by-run

Chainer was the first deep learning framework to introduce the define-by-run approach. The traditional procedure for training a network was in two phases: define the fixed connections between mathematical operations (such as matrix multiplication and nonlinear activations) in the network, and then run the actual training calculation. This is called the define-and-run or static-graph approach. Theano and TensorFlow are among the notable frameworks that took this approach. In contrast, in the define-by-run or dynamic-graph approach, the connections in a network are not determined when training starts; the network is determined during training as the actual calculation is performed. One advantage of this approach is that it is intuitive and flexible. If the network has complicated control flows, such as conditionals and loops, the define-and-run approach needs specially designed operations for such constructs. In the define-by-run approach, by contrast, the programming language's native constructs, such as if statements and for loops, can be used to describe such flow (see the sketch below). This flexibility is especially useful for implementing recurrent neural networks. Another advantage is ease of debugging. In the define-and-run approach, if an error (such as a numeric error) occurs in the training calculation, it is often difficult to inspect the fault, because the code written to define the network is separate from the actual place of the error. In the define-by-run approach, one can simply suspend the calculation with the language's built-in debugger and inspect the data that flows through the code of the network. Define-by-run has gained popularity since its introduction by Chainer and is now implemented in many other frameworks, including PyTorch and TensorFlow.

Extension libraries

Chainer has four extension libraries: ChainerMN, ChainerRL, ChainerCV and ChainerUI. ChainerMN enables Chainer to be used on multiple GPUs with performance significantly faster than other deep learning frameworks. A supercomputer running Chainer on 1024 GPUs processed 90 epochs of the ImageNet dataset on the ResNet-50 network in 15 minutes, four times faster than the previous record held by Facebook. ChainerRL adds state-of-the-art deep reinforcement learning algorithms, ChainerCV provides tools for computer vision tasks, and ChainerUI is a management and visualization tool.

Applications

Chainer is used as the framework for PaintsChainer, a service which performs automatic colorization of black-and-white, line-only draft drawings with minimal user input.
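The difference between the two schemes can be made concrete with a short example. The following is a minimal sketch of define-by-run using Chainer's public API (chainer.Variable and chainer.functions), assuming chainer and numpy are installed; the array values and the data-dependent loop bound are illustrative choices, not taken from any Chainer documentation:

import numpy as np
import chainer
import chainer.functions as F

x = chainer.Variable(np.array([[1.0, 2.0, 3.0]], dtype=np.float32))

# The computational graph is recorded as ordinary Python executes:
# a native for loop, whose length here depends on the data itself,
# decides which operations become part of the graph.
h = x
for _ in range(int(x.data.sum()) % 3 + 1):
    h = F.tanh(h)

loss = F.sum(h * h)
loss.backward()   # backpropagates through whatever graph was actually run
print(x.grad)     # gradients reflect the dynamically built graph

Because the graph is simply the trace of the executed Python code, a standard debugger can pause inside the loop and inspect h directly, which is the debugging advantage described above.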
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Carbon_dioxide] | [TOKENS: 9928] |
Carbon dioxide

Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is a gas at room temperature and is odorless at normally encountered concentrations. As the source of carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. It is a trace gas in Earth's atmosphere at 428 parts per million (ppm), or about 0.043% (as of July 2025), having risen from pre-industrial levels of 280 ppm, or about 0.028%. Burning fossil fuels is the main cause of these increased CO2 concentrations, which are the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological features. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO−3), which causes ocean acidification as atmospheric CO2 levels increase. Carbon dioxide is 53% more dense than dry air, but is long-lived and mixes thoroughly in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the CO2 being released back into the atmosphere. CO2, or the carbon it holds, is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas. Nearly all CO2 produced by humans goes into the atmosphere. Less than 1% of CO2 produced annually is put to commercial use, mostly in the fertilizer industry and in the oil and gas industry for enhanced oil recovery. Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.

Chemical and physical properties

The carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon–oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140 pm length of a typical single C–O bond, and shorter than most other C–O multiply bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment. As a linear triatomic molecule, CO2 has four vibrational modes. In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes.
Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15.0 μm). The symmetric stretching mode does not create an electric dipole, so it is not observed in IR spectroscopy, but it is detected in Raman spectroscopy at 1388 cm−1 (wavelength 7.20 μm), with a Fermi resonance doublet at 1285 cm−1. In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules except diatomic molecules.

Carbon dioxide is soluble in water, in which it reversibly forms H2CO3 (carbonic acid), which is a weak acid, because its ionization in water is incomplete. The hydration equilibrium constant of carbonic acid at 25 °C is Kh = [H2CO3]/[CO2(aq)] ≈ 1.7 × 10−3. Hence, the majority of the carbon dioxide is not converted into carbonic acid but remains as CO2 molecules, not affecting the pH. The relative concentrations of CO2, H2CO3, and the deprotonated forms HCO−3 (bicarbonate) and CO2−3 (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%), becoming the most prevalent (>95%) at the pH of seawater; a numerical sketch of these fractions follows below. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter. Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion (HCO−3): H2CO3 ⇌ HCO−3 + H+. This is the true first acid dissociation constant, defined as Ka1 = [HCO−3][H+]/[H2CO3], where the denominator includes only covalently bound H2CO3 and does not include hydrated CO2(aq). The much smaller and often-quoted value near 4.16 × 10−7 (or pKa1 = 6.38) is an apparent value calculated on the (incorrect) assumption that all dissolved CO2 is present as carbonic acid, so that Ka1(apparent) = [HCO−3][H+]/([H2CO3] + [CO2(aq)]). Since most of the dissolved CO2 remains as CO2 molecules, Ka1(apparent) has a much larger denominator and a much smaller value than the true Ka1. The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on the pH of the solution. At high pH, it dissociates significantly into the carbonate ion (CO2−3): HCO−3 ⇌ CO2−3 + H+. In organisms, carbonic acid production is catalysed by the enzyme known as carbonic anhydrase.

In addition to altering its acidity, the presence of carbon dioxide in water also affects its electrical properties. When carbon dioxide dissolves in desalinated water, the electrical conductivity increases significantly, from below 1 μS/cm to nearly 30 μS/cm. When heated, the water gradually loses the conductivity induced by the presence of CO2, an effect especially noticeable as temperatures exceed 30 °C. The temperature dependence of the electrical conductivity of fully deionized water without CO2 saturation is comparably low in relation to these data.
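The pH-dependent fractions described above can be computed directly from the two apparent dissociation constants. Below is a minimal sketch in Python, using the apparent pKa1 = 6.38 quoted above and assuming the commonly cited second apparent constant pKa2 ≈ 10.33; the function name and the sample pH values are illustrative:

PKA1, PKA2 = 6.38, 10.33   # apparent constants; pKa2 is an assumed literature value

def carbonate_fractions(ph):
    """Fractions of dissolved CO2 (plus H2CO3), HCO3- and CO3(2-) at a given pH."""
    h = 10.0 ** -ph
    ka1, ka2 = 10.0 ** -PKA1, 10.0 ** -PKA2
    denom = h * h + h * ka1 + ka1 * ka2   # common denominator of all three fractions
    return h * h / denom, h * ka1 / denom, ka1 * ka2 / denom

for ph in (6.5, 8.2, 10.4):
    co2, hco3, co3 = carbonate_fractions(ph)
    print(f"pH {ph}: CO2(aq) {co2:.1%}, HCO3- {hco3:.1%}, CO3(2-) {co3:.1%}")

At pH 8.2 this reproduces the behaviour described in the text: bicarbonate accounts for well over 95% of the dissolved inorganic carbon, with carbonate overtaking it only above about pH 10.4.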
CO2 is a potent electrophile, with an electrophilic reactivity comparable to that of benzaldehyde or of strongly electrophilic α,β-unsaturated carbonyl compounds. However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with CO2 are thermodynamically less favored and are often found to be highly reversible. The reversible reaction of carbon dioxide with amines to make carbamates is used in CO2 scrubbers and has been suggested as a possible starting point for carbon capture and storage by amine gas treating. Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds, react with CO2 to give carboxylates: RMgX + CO2 → RCO2MgX. In metal carbon dioxide complexes, CO2 serves as a ligand, which can facilitate the conversion of CO2 to other chemicals. The reduction of CO2 to CO is ordinarily a difficult and slow reaction: CO2 + 2 H+ + 2 e− → CO + H2O. The redox potential for this reaction near pH 7 is about −0.53 V versus the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process. Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from CO2 absorbed from the air and water: n CO2 + n H2O → (CH2O)n + n O2.

Carbon dioxide is colorless. At low concentrations, the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air. Carbon dioxide has no liquid state at pressures below 0.51795(10) MPa (5.11177(99) atm). At a pressure of 1 atm (0.101325 MPa), the gas deposits directly to a solid at temperatures below 194.6855(30) K (−78.4645(30) °C) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice. Liquid carbon dioxide forms only at pressures above 0.51795(10) MPa (5.11177(99) atm); the triple point of carbon dioxide is 216.592(3) K (−56.558(3) °C) at 0.51795(10) MPa (5.11177(99) atm) (see phase diagram). The critical point is 304.128(15) K (30.978(15) °C) at 7.3773(30) MPa (72.808(30) atm). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called carbonia, is produced by supercooling heated CO2 at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released. At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide.

Table of thermal and physical properties of saturated liquid carbon dioxide
Table of thermal and physical properties of carbon dioxide (CO2) at atmospheric pressure

Biological role

Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals, and aerobic fungi and bacteria. In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs, from where it is exhaled.
During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration. Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product. Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation, the production of two molecules of 3-phosphoglycerate from CO2 and ribulose bisphosphate. RuBisCO is thought to be the single most abundant protein on Earth. Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids, and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as humans. Some important phototrophs, the coccolithophores, synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is Emiliania huxleyi, whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales.

Plants can grow as much as 50% faster in concentrations of 1,000 ppm CO2 when compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated CO2 levels cause increased growth reflected in the harvestable yield of crops, with wheat, rice and soybean all showing increases in yield of 12–14% under elevated CO2 in FACE experiments. Increased atmospheric CO2 concentrations result in fewer stomata developing on plants, which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that CO2 enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems, as herbivores will need to eat more food to gain the same amount of protein. The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of CO2. Plants also emit CO2 during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of CO2 each year, a mature forest will produce as much CO2 from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. However, contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on Earth, photosynthesis by phytoplankton consumes dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere. Carbon dioxide content in fresh air (averaged between sea level and the 10 kPa level, i.e., about 30 km (19 mi) altitude) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location.
In humans, exposure to CO2 at concentrations greater than 5% causes the development of hypercapnia and respiratory acidosis. Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. Concentrations of more than 10% may cause convulsions, coma, and death. CO2 levels of more than 30% act rapidly, leading to loss of consciousness in seconds. Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered or pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by CO2 emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is mazuku.

Adaptation to increased concentrations of CO2 occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies have suggested that inspired concentrations of 2.0 percent could be used for closed air spaces (e.g., a submarine), since the adaptation is physiological and reversible and deterioration in performance or in normal physical activity does not happen at this level of exposure for five days. Yet other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse the condition. There are few studies of the health effects of long-term continuous CO2 exposure on humans and animals at levels below 1%. Occupational CO2 exposure limits have been set in the United States at 0.5% (5,000 ppm) for an eight-hour period. At this CO2 concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% CO2 have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5-hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1,000 ppm) CO2, likely due to CO2-induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1,000 ppm, when compared to 500 ppm. However, a review of the literature found that a reliable subset of studies on carbon dioxide-induced cognitive impairment showed only a small effect on high-level decision making (for concentrations below 5,000 ppm), and that most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses and the differing cognitive assessments used. Similarly, a study on the effects of the concentration of CO2 in motorcycle helmets has been criticized for dubious methodology, in not noting the self-reports of motorcycle riders and in taking measurements using mannequins; further, when normal riding conditions were achieved (such as highway or city speeds) or the visor was raised, the concentration of CO2 declined to safe levels (0.2%). Poor ventilation is one of the main causes of excessive CO2 concentrations in closed spaces, leading to poor indoor air quality.
The carbon dioxide differential above outdoor concentrations at steady-state conditions (when the occupancy and ventilation system operation have continued long enough that the CO2 concentration has stabilized) is sometimes used to estimate ventilation rates per person (a worked sketch of this estimate follows below). Higher CO2 concentrations are associated with degradation of occupant health, comfort and performance. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus, if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Even higher concentrations (in the range of 3,000 to 4,000 ppm) can be found in poorly ventilated spaces.

Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. The canary is more sensitive to asphyxiant gases than humans, and as it became unconscious it would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly. In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen CO2) was added to a swimming pool to cool it down. A similar accident occurred in 2018, when a woman died from CO2 fumes emanating from the large amount of dry ice she was transporting in her car.

Humans spend more and more time in a confined atmosphere (around 80–90% of the time in a building or vehicle). According to the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) and various stakeholders in France, the CO2 level in the indoor air of buildings (linked to human or animal occupancy and the presence of combustion installations), weighted by air renewal, is "usually between about 350 and 2,500 ppm". In homes, schools, nurseries and offices, there are no systematic relationships between the levels of CO2 and other pollutants, and indoor CO2 is statistically not a good predictor of pollutants linked to outdoor road (or air, etc.) traffic. CO2 is the parameter that changes the fastest (along with hygrometry and oxygen levels when humans or animals are gathered in a closed or poorly ventilated room). In poor countries, many open hearths are sources of CO2 and CO emitted directly into the living environment.

Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression about 100 m (330 ft) in diameter, concentrations of CO2 rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. High concentrations of CO2 produced by disturbance of deep lake water saturated with CO2 are thought to have caused 37 fatalities at Lake Monoun, Cameroon, in 1984 and 1,700 casualties at Lake Nyos, Cameroon, in 1986.

Human physiology

The body produces approximately 2.3 pounds (1.0 kg) of carbon dioxide per day per person, containing 0.63 pounds (290 g) of carbon.
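The per-person generation figure just quoted, combined with the steady-state differential method described earlier in this section, yields a rough per-person ventilation estimate. A minimal sketch, assuming ideal-gas behaviour at about 25 °C and illustrative ventilation rates (all numbers below other than the 1.0 kg/day figure are assumptions):

R, P, T = 8.314, 101325.0, 298.0        # gas constant (J/mol/K), Pa, K (assumed)
M_CO2 = 0.04401                          # kg/mol

mass_per_day = 1.0                       # kg CO2 per person per day (from the text)
mol_per_s = mass_per_day / M_CO2 / 86400.0
gen_m3_per_s = mol_per_s * R * T / P     # volumetric CO2 generation per person

for vent_L_s in (2.5, 7.5, 10.0):        # assumed outdoor-air supply per person
    q = vent_L_s / 1000.0                # L/s -> m3/s
    rise_ppm = gen_m3_per_s / q * 1e6    # steady-state rise above outdoor CO2
    print(f"{vent_L_s:4.1f} L/s per person -> about {rise_ppm:.0f} ppm above outdoors")

At roughly 2.5 L/s per person this sketch predicts an indoor rise on the order of 2,600 ppm, broadly consistent with the 2,100 ppm differential quoted above for minimum-ventilation conditions.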
In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure that carbon dioxide would have if it alone occupied the volume. CO2 is carried in blood in three different ways: dissolved in plasma, as bicarbonate ions, and bound to hemoglobin; the exact percentages vary between arterial and venous blood. Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen; instead, it combines with the N-terminal groups on the four globin chains. Because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Carbon dioxide is one of the mediators of local autoregulation of blood supply: if its concentration is high, the capillaries expand to allow a greater blood flow to that tissue. Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of CO2 in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis. Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing; rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness. The respiratory centers try to maintain an arterial CO2 pressure of 40 mmHg. With intentional hyperventilation, the CO2 content of arterial blood may be lowered to 10–20 mmHg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.

Concentrations and role in the environment

In the atmosphere of Earth, carbon dioxide (CO2) is a trace gas that plays an integral part in the greenhouse effect, carbon cycle, photosynthesis, and oceanic carbon cycle. It is one of three main greenhouse gases in the atmosphere of Earth. In 2024, the concentration of carbon dioxide in the atmosphere reached 427 ppm, or 0.0427% (on a molar basis), representing a mass of 3,341 gigatonnes. This is an increase of 50% since the start of the Industrial Revolution, up from 280 ppm during the 10,000 years prior to the mid-18th century. The increase is due to human activity. The current increase in CO2 concentrations is primarily driven by the burning of fossil fuels.
Other significant human activities that emit CO2 include cement production, deforestation, and biomass burning. The increase in atmospheric concentrations of CO2 and other long-lived greenhouse gases such as methane increases the absorption and emission of infrared radiation by the atmosphere. This has led to a rise in average global temperature and to ocean acidification. Another direct effect is the CO2 fertilization effect. The increase in atmospheric concentrations of CO2 causes a range of further effects of climate change on the environment and human living conditions. Carbon dioxide is a greenhouse gas: it absorbs and emits infrared radiation at its two infrared-active vibrational frequencies. The two wavelengths are 4.26 μm (2,347 cm−1) (antisymmetric stretching vibrational mode) and 14.99 μm (667 cm−1) (bending vibrational mode). CO2 plays a significant role in influencing Earth's surface temperature through the greenhouse effect. Light emission from the Earth's surface is most intense in the infrared region, between 200 and 2500 cm−1, as opposed to light emission from the much hotter Sun, which is most intense in the visible region. Absorption of infrared light at the vibrational frequencies of atmospheric CO2 traps energy near the surface, warming the surface of Earth and its lower atmosphere. Less energy reaches the upper atmosphere, which is therefore cooler because of this absorption. The present atmospheric concentration of CO2 is the highest in 14 million years. Concentrations of CO2 in the atmosphere were as high as 4,000 ppm during the Cambrian period about 500 million years ago, and as low as 180 ppm during the Quaternary glaciation of the last two million years. Reconstructed temperature records for the last 420 million years indicate that atmospheric CO2 concentrations peaked at approximately 2,000 ppm. This peak happened during the Devonian period (400 million years ago). Another peak occurred in the Triassic period (220–200 million years ago).

Carbon dioxide dissolves in the ocean to form carbonic acid (H2CO3), bicarbonate (HCO−3), and carbonate (CO2−3). There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of the CO2 emitted by human activity. Ocean acidification is the ongoing decrease in the pH of the Earth's ocean. Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO2) levels exceeding 422 ppm (as of 2024). CO2 from the atmosphere is absorbed by the oceans. This chemical reaction produces carbonic acid (H2CO3), which dissociates into a bicarbonate ion (HCO−3) and a hydrogen ion (H+). The presence of free hydrogen ions (H+) lowers the pH of the ocean, increasing acidity (this does not mean that seawater is acidic yet; it is still alkaline, with a pH higher than 8). Marine calcifying organisms, such as mollusks and corals, are especially vulnerable because they rely on calcium carbonate to build shells and skeletons. A change in pH of 0.1 represents a 26% increase in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one pH unit is equivalent to a tenfold change in hydrogen ion concentration). Sea-surface pH and carbonate saturation states vary depending on ocean depth and location.
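The 26% figure follows directly from the logarithmic definition of pH. A short check, using the surface-pH values quoted above:

\frac{[\mathrm{H^+}]_{2020}}{[\mathrm{H^+}]_{1950}} = \frac{10^{-8.05}}{10^{-8.15}} = 10^{0.10} \approx 1.26

That is, a drop of 0.1 pH units corresponds to roughly a 26% rise in hydrogen ion concentration, as stated.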
Colder and higher-latitude waters are capable of absorbing more CO2. This can cause acidity to rise, lowering the pH and carbonate saturation levels in these areas. There are several other factors that influence the atmosphere–ocean CO2 exchange, and thus local ocean acidification. These include ocean currents and upwelling zones, proximity to large continental rivers, sea ice coverage, and atmospheric exchange with nitrogen and sulfur from fossil fuel burning and agriculture. Changes in ocean chemistry can have extensive direct and indirect effects on organisms and their habitats. One of the most important repercussions of increasing ocean acidity relates to the production of shells out of calcium carbonate (CaCO3). This process is called calcification and is important to the biology and survival of a wide range of marine organisms. Calcification involves the precipitation of dissolved ions into solid CaCO3 structures, such as the shells and plates of coccolithophores, foraminifera, crustaceans and mollusks. After they are formed, these CaCO3 structures are vulnerable to dissolution unless the surrounding seawater contains saturating concentrations of carbonate ions (CO2−3). Very little of the extra carbon dioxide that is added into the ocean remains as dissolved carbon dioxide. The majority dissociates into additional bicarbonate and free hydrogen ions. The increase in hydrogen is larger than the increase in bicarbonate, creating an imbalance in the reaction CO2 + H2O ⇌ HCO−3 + H+. To maintain chemical equilibrium, some of the carbonate ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate (CO2−3 + H+ ⇌ HCO−3). Thus the ocean's concentration of carbonate ions is reduced, removing an essential building block that marine organisms need to build shells, or calcify. Carbon dioxide is also introduced into the oceans through hydrothermal vents. The Champagne hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006.

Sources

The burning of fossil fuels for energy produces 36.8 billion tonnes of CO2 per year as of 2023. Nearly all of this goes into the atmosphere, where approximately half is subsequently absorbed into natural carbon sinks. Less than 1% of CO2 produced annually is put to commercial use. Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages, and in the production of bioethanol. Yeast metabolizes sugar to produce CO2 and ethanol, also known as alcohol, as follows: C6H12O6 → 2 CO2 + 2 C2H5OH. All aerobic organisms produce CO2 when they oxidize carbohydrates, fatty acids, and proteins. The large number of reactions involved are exceedingly complex and not described easily; refer to cellular respiration, anaerobic respiration and photosynthesis. The overall equation for the respiration of glucose and other monosaccharides is: C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. Anaerobic organisms decompose organic material, producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows a well-defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remaining 50–55% is methane.
The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter, produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen is: CH4 + 2 O2 → CO2 + 2 H2O (a worked stoichiometry sketch follows at the end of this passage). Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide: 2 Fe2O3 + 3 C → 4 Fe + 3 CO2. Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming, and of the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane). It is also produced by the thermal decomposition of limestone, CaCO3, by heating (calcining) at about 850 °C (1,560 °F) in the manufacture of quicklime (calcium oxide, CaO), a compound that has many industrial uses: CaCO3 → CaO + CO2. Acids liberate CO2 from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is: CaCO3 + 2 HCl → CaCl2 + H2CO3. The carbonic acid (H2CO3) then decomposes to water and CO2: H2CO3 → H2O + CO2. Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams.

Commercial uses

Around 230 Mt of CO2 are used each year, mostly in the fertiliser industry for urea production (130 million tonnes) and in the oil and gas industry for enhanced oil recovery (70 to 80 million tonnes). Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses. Technology exists to capture CO2 from industrial flue gas or from the air. Research is ongoing on ways to use captured CO2 in products, and some of these processes have been deployed commercially. However, the potential to use CO2 in products is very small compared to the total volume that could foreseeably be captured; the vast majority of captured CO2 is considered a waste product and is sequestered in underground geologic formations. In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives, such as sodium salicylate, are prepared using CO2 by the Kolbe–Schmitt reaction. Captured CO2 could be used to produce methanol or electrofuels; to be carbon-neutral, the CO2 would need to come from bioenergy production or direct air capture. Carbon dioxide is used in enhanced oil recovery, where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, when it becomes miscible with the oil. This approach can increase original oil recovery by reducing residual oil saturation by 7–23% relative to primary extraction. It acts as a pressurizing agent and, when dissolved into the underground crude oil, significantly reduces its viscosity and changes its surface chemistry, enabling the oil to flow more rapidly through the reservoir to the removal well. Most CO2 injected in CO2-EOR projects comes from naturally occurring underground CO2 deposits. Some CO2 used in EOR is captured from industrial facilities, such as natural gas processing plants, using carbon capture technology, and transported to the oilfield in pipelines.
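Stoichiometry like the methane equation above fixes how much CO2 a given fuel releases per unit of mass. A minimal sketch, assuming complete combustion; the fuel list and molar masses are illustrative values supplied for this example:

M_CO2 = 44.01                              # g/mol

# fuel name: (molar mass in g/mol, carbon atoms per molecule)
FUELS = {
    "methane CH4": (16.04, 1),             # CH4 + 2 O2 -> CO2 + 2 H2O
    "propane C3H8": (44.10, 3),            # C3H8 + 5 O2 -> 3 CO2 + 4 H2O
    "octane C8H18": (114.23, 8),           # 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
}

for name, (m_fuel, n_carbon) in FUELS.items():
    kg_co2_per_kg = n_carbon * M_CO2 / m_fuel   # all fuel carbon ends up in CO2
    print(f"{name}: about {kg_co2_per_kg:.2f} kg CO2 per kg of fuel burned")

Burning one kilogram of methane thus releases roughly 2.7 kg of CO2, which is why the CO2 emissions attributed to a fuel exceed the mass of the fuel itself.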
Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (and in large greenhouses must) be enriched with additional CO2 to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration, or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Some plants respond more favorably to rising carbon dioxide concentrations than others, which can lead to vegetation regime shifts such as woody plant encroachment.

Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for use in the EU (listed as E number E290), the US, and Australia and New Zealand (listed by its INS number 290). A candy called Pop Rocks is pressurized with carbon dioxide gas at about 4,000 kPa (40 bar; 580 psi). When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop. Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or exposed to acids. Carbon dioxide is used to produce carbonated soft drinks and soda water. Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen. The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than of the bursting bubbles of the gas: carbonic anhydrase 4 converts carbon dioxide to carbonic acid, leading to a sour taste, and the dissolved carbon dioxide also induces a somatosensory response.

Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking, to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine. Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional winemakers.

Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress. Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools.
Carbon dioxide is also used as an atmosphere for welding, although in the welding arc it reacts to oxidize most metals. Its use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When used for MIG welding, CO2 use is sometimes referred to as MAG welding, for Metal Active Gas, as CO2 can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics, although this may be due to atmospheric reactions occurring at the puddle site. Such reactions are usually the opposite of the desired effect when welding, as they tend to embrittle the site, but this may not be a problem for general mild steel welding, where ultimate ductility is not a major concern. Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately 60 bar (870 psi; 59 atm), allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressurized carbon dioxide for quick inflation. Aluminium capsules of CO2 are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and making carbonated water. High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy, and in the decaffeination of coffee beans. Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly, and once the carbon dioxide disperses, the material can reignite upon exposure to atmospheric oxygen. They are mainly used in server rooms. Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application to specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon dioxide systems for fire protection of ship holds and engine rooms. Carbon dioxide-based fire-protection systems have been linked to several deaths, because carbon dioxide can cause suffocation in sufficiently high concentrations. A review of CO2 systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries. Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to decaffeinate coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide.
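The claim above that liquefaction at roughly 60 bar lets far more CO2 fit in a container can be sanity-checked with the ideal gas law. In the sketch below (not from the source article), the liquid density of about 770 kg/m³ near room temperature is an assumed textbook-style figure, and the gas phase is treated as ideal.

```python
# Hypothetical comparison: density of CO2 as a gas at 1 atm versus as a
# pressurized liquid. Ideal-gas density is rho = p*M / (R*T); the liquid
# density of ~770 kg/m^3 near 20 C is an assumed round figure.

R = 8.314        # J/(mol*K), gas constant
M_CO2 = 0.04401  # kg/mol

def ideal_gas_density(pressure_pa: float, temp_k: float) -> float:
    """Ideal-gas density of CO2 in kg/m^3."""
    return pressure_pa * M_CO2 / (R * temp_k)

rho_gas = ideal_gas_density(101_325.0, 293.15)  # ~1.83 kg/m^3 at 1 atm, 20 C
rho_liquid = 770.0                              # kg/m^3, assumed liquid value

print(f"gas:    {rho_gas:.2f} kg/m^3")
print(f"liquid: {rho_liquid:.0f} kg/m^3, roughly {rho_liquid / rho_gas:.0f}x denser")
```

This factor of several hundred is why a small cartridge of liquid CO2 can inflate a life jacket or carbonate many litres of water.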
Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. Solid carbon dioxide is always below −78.5 °C (−109.3 °F) at regular atmospheric pressure, regardless of the air temperature. Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the adoption of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). CO2 might enjoy a renaissance because one of the main substitutes for CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound), contributes to climate change more than CO2 does. The physical properties of CO2 are highly favorable for cooling, refrigeration, and heating purposes, as it has a high volumetric cooling capacity. Due to the need to operate at pressures of up to 130 bar (1,900 psi; 13,000 kPa), CO2 systems require highly mechanically resistant reservoirs and components, which have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, CO2 (R744) operates more efficiently than systems using HFCs (e.g., R134a). Its environmental advantages (GWP of 1, non-ozone depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among other applications. Coca-Cola has fielded CO2-based beverage coolers and the U.S. Army is interested in CO2 refrigeration and heating technology. Carbon dioxide is the lasing medium in a carbon-dioxide laser, one of the earliest types of laser. Carbon dioxide can be used to control the pH of swimming pools by continuously adding gas to the water, keeping the pH from rising; among the advantages of this approach is that it avoids handling more hazardous acids. Similarly, it is used in maintaining reef aquaria, where it is commonly employed in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate, allowing the calcium carbonate to dissolve into the water more freely, where it is used by some corals to build their skeletons. It is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation. Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods to administer CO2 include placing animals directly into a closed, prefilled chamber containing CO2, or exposure to a gradually increasing concentration of CO2. The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of CO2 vary for different species, based on identified optimal percentages to minimize distress. Carbon dioxide is also used in several related cleaning and surface-preparation techniques. History of discovery Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal.
His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" (from Greek "chaos") or "wild spirit" (spiritus sylvestris). The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that fixed air was denser than air and supported neither flame nor animal life. Black also found that fixed air, when bubbled through limewater (a saturated aqueous solution of calcium hydroxide), would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, the English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed Air in which he described a process of dripping sulfuric acid (or oil of vitriol, as Priestley knew it) on chalk to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with it. Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday. The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid CO2. Carbon dioxide in combination with nitrogen was known from earlier times as blackdamp, stythe or choke damp.[b] Along with the other types of damp, it was encountered in mining operations and well sinking. Slow oxidation of coal and biological processes consumed the oxygen, creating a suffocating mixture of nitrogen and carbon dioxide. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States_Department_of_Defense] | [TOKENS: 4379] |
Contents United States Department of Defense The United States Department of Defense (DoD), also referred to as the Department of War (DOW),[a] is an executive department of the U.S. federal government charged with coordinating and supervising the U.S. Armed Forces—the Army, Navy, Marines, Air Force, Space Force, and, for some purposes, the Coast Guard—and related functions and agencies. Headquartered at the Pentagon in Arlington County, Virginia, just outside Washington, D.C., the Department of Defense has the stated mission of providing "the military forces needed to deter war and ensure our nation's security". The Department of Defense is headed by the secretary of defense, a cabinet-level official who reports directly to the president of the United States. The president is commander-in-chief of the U.S. Armed Forces. Within the Department of Defense are three subordinate military departments: the Department of the Army, the Department of the Navy, and the Department of the Air Force. In addition, four national intelligence services are part of the Department of Defense: the Defense Intelligence Agency, National Security Agency (NSA), National Geospatial-Intelligence Agency, and National Reconnaissance Office. Other agencies in the Department of Defense include the Defense Advanced Research Projects Agency (DARPA), Defense Logistics Agency, Missile Defense Agency, Defense Health Agency, Defense Threat Reduction Agency, Defense Counterintelligence and Security Agency, Space Development Agency and Pentagon Force Protection Agency. Additionally, the Defense Contract Management Agency is responsible for administering contracts for the Department of Defense. Military operations are managed by eleven regional or functional unified combatant commands. The Department of Defense also operates several joint services schools, including the Eisenhower School and the National War College. In 2026, it was announced that the Pentagon would cut ties with Harvard University. As of November 2022, the department has over 1.4 million active-duty uniformed personnel in the six armed services, and over 747,000 civilian employees. It also supervises over 778,000 National Guard and reservist personnel. Name Under the National Security Act of 1947, a single new Secretary was placed over the Department of the Navy and the Department of War, whose two Cabinet secretaries had previously been independent. The War Department was renamed the Department of the Army, and the Department of the Air Force was split off from it. The new Cabinet-level department was initially designated the National Military Establishment (NME). On 10 August 1949, the National Security Act of 1947 was amended; the amendment renamed the NME the Department of Defense, reportedly because the abbreviation NME was pronounced like "enemy". On 5 September 2025, President Donald Trump signed an executive order authorizing "Department of War" and "secretary of war" as secondary titles to the main titles of "Department of Defense" and "secretary of defense". The terms must be accommodated by federal agencies and are permitted in executive branch communications, ceremonial settings, and non-statutory documents. However, only an act of Congress can legally and formally change the department's name and the secretary's title, so "Department of Defense" and "secretary of defense" remain legally official. Trump described his rebranding as an effort to project a stronger, more bellicose image and said the "defense" names were "woke".
History Faced with rising tensions between the Thirteen Colonies and the British government, one of the first actions taken by the First Continental Congress in September 1774 was to recommend that the colonies begin defensive military preparations. In mid-June 1775, after the outbreak of the Revolutionary War, the Second Continental Congress, recognizing the necessity of having a national army that could move about and fight beyond the boundaries of any particular colony, organized the Continental Army on 14 June 1775. Later that year, Congress chartered the Continental Navy on 13 October and the Continental Marines on 10 November. Upon the seating of the 1st U.S. Congress on 4 March 1789, legislation to create a military defense force stagnated as legislators focused on other concerns relevant to setting up the new government. President George Washington twice went to Congress during this time to remind it of its duty to establish a military. Finally, on the last day of the session, 29 September 1789, Congress created the War Department. The War Department handled naval affairs until Congress created the Navy Department in 1798. The secretaries of each department reported directly to the president as cabinet-level advisors until 1949, when all military departments became subordinate to the secretary of defense. After the end of World War II, President Harry Truman proposed the creation of a unified department of national defense. In a special message to Congress on 19 December 1945, the president cited wasteful military spending and interdepartmental conflicts. Deliberations in Congress went on for months, focusing heavily on the role of the military in society and the threat of granting too much military power to the executive. On 26 July 1947, Truman signed the National Security Act of 1947, which established the National Military Establishment (NME) and created the National Security Council, National Security Resources Board, United States Air Force, and the Joint Chiefs of Staff. The NME was placed under the control of the new post of secretary of defense. The National Military Establishment formally began operations on 18 September, the day after the Senate confirmed James V. Forrestal as the first secretary of defense. The National Military Establishment was renamed the "Department of Defense" on 10 August 1949, and absorbed the three cabinet-level military departments, in an amendment to the original 1947 law. Under the Department of Defense Reorganization Act of 1958 (Pub. L. 85–599), channels of authority within the department were streamlined while still maintaining the ordinary jurisdiction of the military departments to organize, train, and equip their associated forces. The Act clarified the overall decision-making authority of the secretary of defense concerning these subordinate military departments. It more clearly defined the operational chain of command over U.S. military forces (created by the military departments) as running from the president to the secretary of defense and then to the unified combatant commanders. Also provided in this legislation was a centralized research authority, the Advanced Research Projects Agency, eventually known as DARPA. The act was written and promoted by the Eisenhower administration and was signed into law on 6 August 1958. Organizational structure The secretary of defense, appointed by the president with the advice and consent of the Senate, is by federal law (10 U.S.C.
§ 113) the head of the Department of Defense, "the principal assistant to the President in all matters relating to Department of Defense", and has "authority, direction, and control over the Department of Defense". Because the Constitution vests all military authority in Congress and the president, the statutory authority of the secretary of defense is derived from their constitutional authority. Since it is impractical for either Congress or the president to participate in every aspect of the Department of Defense's affairs, the secretary of defense and the secretary's subordinate officials generally exercise military authority. The Department of Defense is composed of the Office of the Secretary of Defense, the Joint Chiefs of Staff and Joint Staff, the Office of the Inspector General, the Combatant Commands, the Military Departments (Department of the Army, Department of the Navy and Department of the Air Force), the Defense Agencies and Department of Defense Field Activities, the National Guard Bureau, and such other offices, agencies, activities, organizations, and commands established or designated by law, by the president, or by the secretary of defense. Department of Defense Directive 5100.01 describes the organizational relationships within the department and is the foundational issuance for delineating its major functions. The latest version, signed by then–secretary of defense Robert Gates in December 2010, was the first major rewrite since 1987. The Office of the Secretary of Defense (OSD) comprises the secretary and their deputies, including predominantly civilian staff. OSD is the principal staff element of the secretary of defense in the exercise of policy development, planning, resource management, fiscal and program evaluation and oversight, and interface and exchange with other U.S. federal government departments and agencies, foreign governments, and international organizations, through formal and informal processes. OSD also performs oversight and management of the Defense Agencies, Department of Defense Field Activities, and specialized Cross Functional Teams. OSD is also the parent agency of a number of defense agencies. Several defense agencies are members of the United States Intelligence Community. These are national-level intelligence services that operate under Department of Defense jurisdiction but simultaneously fall under the authorities of the Office of the Director of National Intelligence. They fulfill the requirements of national policymakers and war planners, serve as Combat Support Agencies, and also assist and deploy alongside non-Department of Defense intelligence or law enforcement services such as the Central Intelligence Agency and the Federal Bureau of Investigation. The military services each have their own intelligence elements that are distinct from, but subject to coordination by, the national intelligence agencies under the Department of Defense. The Department of Defense manages the nation's coordinating authorities and assets in the disciplines of signals intelligence, geospatial intelligence, and measurement and signature intelligence, and also builds, launches, and operates the Intelligence Community's satellite assets. The Department of Defense also has its own human intelligence service, which contributes to the CIA's human intelligence efforts while also focusing on military human intelligence priorities. These agencies are directly overseen by the under secretary of defense for intelligence and security.
The Joint Chiefs of Staff is a body of senior uniformed leaders in the Department of Defense who advise the secretary of defense, the Homeland Security Council, the National Security Council and the president on military matters. The composition of the Joint Chiefs of Staff is defined by statute and consists of the chairman of the Joint Chiefs of Staff, the vice chairman of the Joint Chiefs of Staff, the senior enlisted advisor to the chairman, and the military service chiefs from the Army, Marine Corps, Navy, Air Force, and Space Force, in addition to the chief of the National Guard Bureau, all appointed by the president following U.S. Senate confirmation. Each of the individual military service chiefs, outside their Joint Chiefs of Staff obligations, works directly for the secretary of the military department concerned: the secretary of the Army, the secretary of the Navy, or the secretary of the Air Force. Following the Goldwater–Nichols Act in 1986, the Joint Chiefs of Staff no longer maintain operational command authority, individually or collectively. The act designated the chairman of the Joint Chiefs of Staff (CJCS) as the "principal military adviser to the president, the National Security Council, the Homeland Security Council, and the Secretary of Defense". The remaining members of the Joint Chiefs of Staff may only have their advice relayed to the president, the National Security Council, the Homeland Security Council, or the secretary of defense after submitting it to the CJCS; by law, the chairman has to present that advice whenever presenting his own. The chain of command goes from the president to the secretary of defense to the commanders of the Combatant Commands. Goldwater–Nichols also created the office of vice chairman. The Joint Staff is a headquarters staff at the Pentagon made up of personnel from all five services that assists the chairman and vice chairman in discharging their duties. It is managed by the director of the Joint Staff, who is a lieutenant general or vice admiral. There are three military departments within the Department of Defense: the Department of the Army, the Department of the Navy, and the Department of the Air Force. The military departments are each headed by their own secretary (i.e., the secretary of the Army, the secretary of the Navy and the secretary of the Air Force), appointed by the president with the advice and consent of the Senate. They have the legal authority under Title 10 of the United States Code to conduct all the affairs of their respective departments, within which the military services are organized. The secretaries of the military departments are (by law) subordinate to the secretary of defense and (by SecDef delegation) to the deputy secretary of defense. The secretaries of the military departments, in turn, normally exercise authority over their forces by delegation through their respective service chiefs (i.e., the Chief of Staff of the Army, the Commandant of the Marine Corps, the Chief of Naval Operations, the Chief of Staff of the Air Force, and the Chief of Space Operations) over forces not assigned to a Combatant Command. The military departments are tasked solely with "the training, provision of equipment, and administration of troops." The Defense Reorganization Act of 1958 removed the power of command over troops from the secretaries of the military departments and the service chiefs.
A unified combatant command is a military command composed of personnel and equipment from at least two military departments and has a broad, continuing mission. Combatant commands are responsible for the operational command of forces; almost all operational U.S. forces are under the authority of a unified command. The DoD Unified Command Plan lays out the combatant commands' missions, geographical or functional responsibilities, and force structure. During military operations, the chain of command runs from the president to the secretary of defense to the combatant commanders of the Combatant Commands. As of 2019, the United States has eleven combatant commands, organized either on a geographical basis (known as an "area of responsibility", AOR) or on a global, functional basis. Budget Department of Defense spending in 2017 was 3.15% of GDP and accounted for about 38% of budgeted global military spending – more than the next 7 largest militaries combined. By 2019, the 27th secretary of defense had begun a line-by-line review of the defense budget; in 2020 the secretary identified items amounting to $5.7 billion, out of a $106 billion subtotal (the so-called "fourth estate" agencies, such as missile defense and defense intelligence, amounting to 16% of the defense budget), to be redeployed toward the modernization of hypersonics, artificial intelligence, and missile defense. Beyond 2021, the 27th secretary of defense projected the need for yearly budget increases of 3 to 5 percent to modernize. The Department of Defense accounts for the majority of federal discretionary spending. In FY2017 (U.S. fiscal year 2017), Department of Defense budgeted spending accounted for 15% of the U.S. federal budget and 49% of federal discretionary spending, which represents funds not accounted for by pre-existing obligations. However, this does not include many military-related expenses that fall outside the Department of Defense budget, such as nuclear weapons research, maintenance, cleanup, and production, which are covered in the Department of Energy budget; Veterans Affairs expenses; payments from the Treasury Department for military retirees, widows, and their families; interest on debts incurred from past wars; or State Department financing for foreign arms sales and military-related development assistance. Additionally, it does not account for defense spending outside of military operations, including expenditures by the Department of Homeland Security, counter-terrorism funding by the FBI, and intelligence-gathering spending by the NSA. In the 2010 United States federal budget, the Department of Defense was allocated a base budget of $533.7 billion, with a further $75.5 billion adjustment in respect of 2009, and $130 billion for overseas contingencies. The subsequent 2010 Department of Defense Financial Report shows the total budgetary resources for FY2010 were $1.2 trillion. Of these resources, $1.1 trillion were obligated and $994 billion were disbursed, with the remaining resources relating to multi-year modernization projects requiring additional time to procure. After over a decade of non-compliance, as part of the National Defense Authorization Act for Fiscal Year 2010, Congress established a deadline of FY2017 for the Department of Defense to achieve audit readiness, although this did not end up occurring.
In 2015, the allocation for the Department of Defense was $585 billion, the highest level of budgetary resources among all federal agencies, amounting to more than one-half of annual discretionary spending in the United States federal budget. On 28 September 2018, President Donald Trump signed the Department of Defense and Labor, Health and Human Services, and Education Appropriations Act, 2019, and Continuing Appropriations Act, 2019 (H.R.6157) into law. On 30 September 2018, the FY2018 budget expired and the FY2019 budget came into effect. The FY2019 budget for the Department of Defense was approximately $686,074,048,000 (including Base + Overseas Contingency Operations + Emergency Funds) in discretionary spending and $8,992,000,000 in mandatory spending, totaling $695,066,000,000. Undersecretary of Defense (Comptroller) David L. Norquist said in a hearing regarding the FY2019 budget: "The overall number you often hear is $716 billion. That is the amount of funding for national defense, the accounting code is 050 and includes more than simply the Department of Defense. It includes, for example, the Department of Energy and others. That large a number, if you back out the $30 billion for non-defense agencies, you get to $686 billion. That is the funding for the Department of Defense, split between $617 billion in base and $69 billion in overseas contingency". The Department of Defense budget encompasses the majority of the National Defense Budget of approximately $716.0 billion in discretionary spending and $10.8 billion in mandatory spending, for a $726.8 billion total. Of the total, $708.1 billion falls under the jurisdiction of the House Committee on Armed Services and the Senate Armed Services Committee and is subject to authorization by the annual National Defense Authorization Act (NDAA). The remaining $7.9 billion falls under the jurisdiction of other congressional committees. The Department of Defense is unique in that it is one of the few federal entities where the majority of its funding falls into the discretionary category. The majority of the entire federal budget is mandatory, and much of the discretionary funding in the budget consists of DoD dollars. As of 10 March 2023, the FY2024 presidential budget request was $842 billion.[b] In January 2023, Treasury Secretary Janet Yellen announced that the U.S. government would hit its $31.4 trillion debt ceiling on 19 January 2023; the date on which the U.S. government would no longer be able to use extraordinary measures such as issuance of Treasury securities was estimated to be in June 2023. On 3 June 2023, the debt ceiling was suspended until 2025. The $886 billion National Defense Authorization Act faced reconciliation of the House and Senate bills after passing both houses on 27 July 2023, with the conferees yet to be chosen. As of September 2023, a continuing resolution was needed to prevent a government shutdown. A shutdown was avoided on 30 September for 45 days (until 17 November 2023), and the NDAA passed on 14 December 2023. The Senate was next to undertake negotiations on supplemental spending for 2024. A government shutdown was averted on 23 March 2024 with the signing of a $1.2 trillion bill to cover FY2024.
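The arithmetic in Norquist's statement can be reconstructed in a few lines. The sketch below (not from the source) simply re-derives the $686 billion figure and the base/OCO split from the numbers quoted above; all figures are in billions of dollars and rounded as in the original statements.

```python
# Hypothetical sanity check of the FY2019 figures quoted in the text,
# in billions of dollars (rounded as in the original statements).

national_defense_050 = 716.0  # total "050" national defense funding
non_dod_agencies = 30.0       # e.g. Department of Energy defense programs
dod_base = 617.0
dod_oco = 69.0                # overseas contingency operations

dod_total = national_defense_050 - non_dod_agencies
assert dod_total == 686.0               # the $686 billion DoD figure
assert dod_base + dod_oco == dod_total  # base + OCO split

discretionary = 686.074  # DoD discretionary spending
mandatory = 8.992        # DoD mandatory spending
print(f"DoD FY2019 total: ~${discretionary + mandatory:.3f} billion")  # ~695.066
```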
A 2013 Reuters investigation concluded that the Defense Finance and Accounting Service, the primary financial management arm of the Department of Defense, implements monthly "unsubstantiated change actions"—illegal, inaccurate "plugs"—that forcibly make DoD's books match the Treasury's books. Reuters reported that the Pentagon was the only federal agency that had not released annual audits as required by a 1992 law. According to Reuters, the Pentagon "annually reports to Congress that its books are in such disarray that an audit is impossible". In 2015, a Pentagon consulting firm performed an audit of the department's budget. It found $125 billion in wasteful spending that could be saved over the next five years without layoffs or reductions in military personnel. In 2016, The Washington Post uncovered that rather than taking the advice of the auditing firm, senior defense officials suppressed and hid the report from the public to avoid political scrutiny. In June 2016, the Office of the Inspector General released a report stating that the Army had made $6.5 trillion in wrongful adjustments to its accounting entries in 2015. The Department of Defense failed its fifth audit in 2022, and could not account for more than 60% of its $3.5 trillion in assets. In the latest Center for Effective Government analysis of the 15 federal agencies that receive the most Freedom of Information Act requests, published in 2015 (using 2012 and 2013 data, the most recent years available), the DoD earned 61 out of a possible 100 points, a D− grade. While it had improved from a failing grade in 2013, it still had low scores in processing requests (55%) and disclosure rules (42%). Related legislation The organization and functions of the Department of Defense are set out in Title 10 of the United States Code; other significant legislation also applies to the department. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://techcrunch.com/video/spacex-is-coming-to-the-public-markets-and-secondaries-are-already-on-fire/] | [TOKENS: 734] |
SpaceX is coming to the public markets, and secondaries are already on fire SpaceX is reportedly lining up four major Wall Street banks for a 2026 IPO that could provide the reset the market needs. The company just completed a tender offer at an $800 billion valuation, and secondary market demand is through the roof. If SpaceX goes public anywhere near its rumored $1.5 trillion valuation, it could trigger an IPO cascade for other late-stage unicorns like OpenAI, Stripe, and Databricks. Watch as Equity host Rebecca Bellan chats with Greg Martin, Managing Director at Rainmaker Securities, about why this IPO feels different, how tech employees are cashing out through secondary markets before companies go public, and what investors are actually looking for in pre-IPO shares. Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. |
======================================== |
[SOURCE: https://he.wikipedia.org/wiki/Grand_Theft_Auto_VI] | [TOKENS: 1989] |
Contents Grand Theft Auto VI Grand Theft Auto VI (in free Hebrew translation: "Vehicle Theft 6"; also known by the acronym GTA VI) is an action-adventure video game in development by Rockstar Games. It is the eighth game (in terms of storyline) in the Grand Theft Auto series, and its predecessor is 2013's Grand Theft Auto V. The game's plot takes place in the fictional open world of Leonida, which is based on the state of Florida, and follows the criminal duo of Lucia and her partner Jason. After years of speculation and leaks, Rockstar confirmed in February 2022 that the game was in development. Footage from unfinished builds was leaked on social media, and many journalists described it as one of the biggest leaks in the history of the video game industry. The game was officially revealed on 6 December 2023 and was planned for release in fall 2025 on the PlayStation 5 and Xbox Series X/S consoles. In May 2025, Rockstar announced that the game would be released on 26 May 2026. In November 2025, Rockstar announced that the game had been delayed to 19 November 2026. Setting and background The game's plot takes place in the fictional open world of Leonida, based on the state of Florida, and in particular in Vice City, a city based on Miami and Miami Beach. The plot follows the criminal duo of Lucia Caminos and Jason Duval; the game's first trailer depicts Lucia as a prisoner, and the second trailer depicts Lucia and Jason's relationship. The second trailer was released on 6 May 2025. Development Following the 2013 release of Grand Theft Auto V, the then-president of Rockstar North, Leslie Benzies, said the company had "some ideas" for the next game in the series. In 2018, The Know reported that the game, codenamed "Project Americas", would be set mainly in Vice City and partly in South America, with a female protagonist. In 2020, the journalist Jason Schreier reported that the game was in early development as a medium-sized project that would expand over time. In 2021, Tom Henderson claimed that the game's map might evolve over time, similarly to the battle royale game Fortnite. In 2022, Schreier reported that the game had been in development since 2014 and would feature two main characters. The game was highly anticipated ahead of its announcement, and journalists noted that fans had grown frustrated with Rockstar Games' continued silence, especially after the company announced a re-release of Grand Theft Auto V in 2020. On 4 February 2022, Rockstar confirmed that development was under way. In July, Rockstar announced that Red Dead Online would no longer receive major updates, as development resources had been redirected to focus on the game; industry sources noted that Rockstar had reallocated resources after the planned remasters of Grand Theft Auto IV and Red Dead Redemption were shelved due to the negative reception of the remastered classic trilogy, Grand Theft Auto: The Trilogy – The Definitive Edition. External links Footnotes |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ferdinand_T%C3%B6nnies] | [TOKENS: 1988] |
Contents Ferdinand Tönnies Ferdinand Tönnies (German: [ˈtœniːs]; 26 July 1855 – 8 April 1936) was a German sociologist, economist, and philosopher. He was a significant contributor to sociological theory and field studies, best known for distinguishing between two types of social groups, Gemeinschaft and Gesellschaft (community and society). He co-founded the German Sociological Association together with Max Weber, Georg Simmel, and many others. He was president of the society from 1909 to 1933, after which he was ousted for having criticized the Nazis. Tönnies was regarded as the first proper German sociologist and published over 900 works, contributing to many areas of sociology and philosophy. Tönnies, Max Weber, and Georg Simmel are considered the founding fathers of classical German sociology. Though there has been a resurgence of interest in Weber and Simmel, Tönnies has not drawn as much attention. Biography Ferdinand Tönnies was born on 26 July 1855 on the Haubarg "De Reap" in Oldenswort on the Eiderstedt Peninsula, into a wealthy farmer's family in North Frisia, Duchy of Schleswig, then under Danish rule. Tönnies was the only sociologist of his generation who came from the countryside. He was the third child of the church elder and farmer August Ferdinand Tönnies (1822–1883) and his wife Ida Frederica (née Mau, 1826–1915), who came from a theological family in East Holstein. His father, of Frisian ancestry, was a successful farmer and cattle rancher, while his mother hailed from a line of Lutheran ministers. The two had seven children, four sons and three daughters. Ferdinand was baptized Ferdinand Julius, and the family moved to Husum, on the North Sea, after his father retired in 1864.[citation needed] Tönnies enrolled at the University of Strasbourg after graduating from high school in 1872. He used his freedom to travel, exploring the academic fields of the Universities of Jena, Bonn, Leipzig, Berlin, and Tübingen. At age 22, he received a doctorate in philology at the University of Tübingen in 1877 (with a Latin thesis on the ancient Siwa Oasis). By this time, however, his main interests had switched to political philosophy and social issues. After completing postdoctoral work at the University of Berlin, he traveled to London to continue his studies on the seventeenth-century English political thinker Thomas Hobbes. Tönnies became a Privatdozent in philosophy at the University of Kiel after submitting a draft of his major book, Gemeinschaft und Gesellschaft, as his Habilitationsschrift in 1881. He held this post at the University of Kiel for only three years. Because he sympathized with the Hamburg dockers' strike of 1896, the conservative Prussian government considered him a social democrat, and Tönnies would not be called to a professorial chair until 1913. He returned to Kiel as a professor emeritus in 1921, where he took on a teaching position in sociology and taught until 1933, when he was ousted by the Nazis due to earlier publications in which he had criticized them and had endorsed the Social Democratic Party. Remaining in Kiel, he died three years later, in 1936, in isolation in his home.[citation needed] Many of his writings on sociological theories furthered pure sociology, including Gemeinschaft und Gesellschaft (1887). He coined the metaphysical term "voluntarism".
Tönnies also contributed to the study of social change, particularly on public opinion, customs and technology, crime, and suicide. He also had a vivid interest in methodology, especially statistics, and sociological research, inventing his own technique of statistical association. After publishing Gemeinschaft und Gesellschaft, Tönnies focused on aspects of social life such as morals, folkways, and public opinion. However, he is best known for his work on Gemeinschaft and Gesellschaft, because his later works applied those concepts to aspects of social life. Gemeinschaft and Gesellschaft Tönnies distinguished between two types of social groupings. Gemeinschaft—often translated as community (or left untranslated)—refers to groups based on feelings of togetherness and mutual bonds, which are felt as a goal to be maintained, their members being means to this goal. Gesellschaft—often translated as society—on the other hand, refers to groups that are sustained by being instrumental for their members' aims and goals. The equilibrium in Gemeinschaft is achieved through means of social control such as morals, conformism, and exclusion, while Gesellschaft keeps its balance through police, laws, tribunals, and prisons. Amish and Hasidic communities are examples of Gemeinschaft, while states are types of Gesellschaft. Rules in Gemeinschaft are implicit, while Gesellschaft has explicit rules (written laws).[citation needed] Gemeinschaft may be exemplified historically by a family or a neighborhood in a pre-modern (rural) society; Gesellschaft by a joint-stock company or a state in a modern society, i.e. society as it existed in Tönnies' lifetime. Gesellschaft relationships arose in an urban and capitalist setting, characterized by individualism and impersonal monetary connections between people. Social ties were often instrumental and superficial, with self-interest and exploitation increasingly the norm. Examples are corporations, states, or voluntary associations. In his book Einteilung der Soziologie (Classification of Sociology) he distinguished between three disciplines of sociology: pure or theoretical (reine, theoretische) sociology, applied (angewandte) sociology, and empirical (empirische) sociology.[citation needed] His distinction between social groupings is based on the assumption that there are only two basic forms of an actor's will to approve of other men. For Tönnies, such approval is by no means self-evident; here he is strongly influenced by Thomas Hobbes. Following his "essential will" ("Wesenwille"), an actor will see himself as a means to serve the goals of a social grouping; very often it is an underlying, subconscious force. Groupings formed around an essential will are called a Gemeinschaft. The other will is the "arbitrary will" ("Kürwille"): an actor sees a social grouping as a means to further his individual goals, so it is purposive and future-oriented. Groupings around the latter are called Gesellschaft. Whereas membership in a Gemeinschaft is self-fulfilling, a Gesellschaft is instrumental for its members. In pure sociology—theoretically—these two standard types of will are to be strictly separated; in applied sociology—empirically—they are always mixed. Some recent scholarship has suggested that this interplay of wills is rooted in the individual's biological makeup, treating knowledge of man, from neural events to hormonal drives, as a primary source for understanding these social expressions.
Gender Polarity in "Gemeinschaft und Gesellschaft" What is less well known when discussing the work of Tönnies is that he frequently used gender concepts to explain his main ideas. Essential will and arbitrary will, Gemeinschaft and Gesellschaft, are all thought of in terms of the polarity of feminine and masculine. Gemeinschaft, for example, is feminine: "the eternal-feminine", since motherliness is the basis of all being together. Essential will is also feminine, whereas Gesellschaft and arbitrary will are masculine. Tönnies' theory appears to consign him to a nineteenth-century view in which the public world belongs to males while women are relegated to the private realm, as it links together Gemeinschaft/home/woman as opposed to Gesellschaft/marketplace/man.[citation needed] Views on Family In his article "Fünfzehn Thesen zur Erneuerung eines Familienlebens", published in 1893, he claims that the dissolution of family life has tainted modern society's blood. Tönnies believed that one of the most important ways to resurrect Gemeinschaft in the modern world would be to improve and prolong family life. The demise of the family is caused by modern capitalism and its consequences: low pay, excessive hours of labor for men and women alike, and terrible living conditions. He believed family life had to be revitalized, since it is the foundation of all culture and morals. To this end, he proposed two solutions that revolved around the idea of unions devoted to aiding and nurturing what he called "the family spirit."[citation needed] Two Solutions Criticisms Tönnies' distinction between Gemeinschaft and Gesellschaft, like others between tradition and modernity, has been criticized for over-generalizing differences between societies and implying that all societies were following a similar evolutionary path, an argument which Tönnies himself never actually proclaimed. Legacy Ferdinand Tönnies' lasting impact on sociology was his division of social groups into those formed unconsciously and those formed consciously. His contribution to sociology included the fundamental dichotomy of community and society, through which structural forms are made in social life. He separated individual consciousness from community consciousness, indicating that community is built through beliefs, while society is built through power and a separation of classes.[citation needed] Published works (selection) See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Hypertext] | [TOKENS: 3490] |
Contents Hypertext Hypertext is text displayed on a computer display or other electronic devices with references (hyperlinks) to other text that the reader can immediately access. Hypertext documents are interconnected by hyperlinks, which are typically activated by a mouse click, keypress set, or screen touch. Apart from text, the term "hypertext" is also used to describe tables, images, and other presentational materials with integrated hyperlinks. Hypertext is one of the key underlying concepts of the World Wide Web, where Web pages are often written in the Hypertext Markup Language (HTML). As implemented on the Web, hypertext enables the easy-to-use publication of information over the Internet. Etymology "(...)'Hypertext' is a recent coinage. 'Hyper-' is used in the mathematical sense of extension and generality (as in 'hyperspace,' 'hypercube') rather than the medical sense of 'excessive' ('hyperactivity'). There is no implication about size— a hypertext could contain only 500 words or so. 'Hyper-' refers to structure and not size." — Theodor H. Nelson, Brief Words on the Hypertext, 23 January 1967 The English prefix "hyper-" comes from the Greek prefix "ὑπερ-" and means "over" or "beyond"; it has a common origin with the prefix "super-" which comes from Latin. It signifies the overcoming of the previous linear constraints of written text. The term "hypertext" is often used where the term "hypermedia" might seem appropriate. In 1992, author Ted Nelson – who coined both terms in 1965 – wrote: By now the word "hypertext" has become generally accepted for branching and responding text, but the corresponding word "hypermedia", meaning complexes of branching and responding graphics, movies and sound – as well as text – is much less used. Instead they use the strange term "interactive multimedia": this is four syllables longer, and does not express the idea of extending hypertext. — Nelson, Literary Machines, 1992 Types and uses of hypertext Hypertext documents can either be static (prepared and stored in advance) or dynamic (continually changing in response to user input, such as dynamic web pages). Static hypertext can be used to cross-reference collections of data in documents, software applications, or books on CDs. A well-constructed system can also incorporate other user-interface conventions, such as menus and command lines. Links used in a hypertext document usually replace the current piece of hypertext with the destination document. A lesser known feature is StretchText, which expands or contracts the content in place, thereby giving more control to the reader in determining the level of detail of the displayed document. Some implementations support transclusion, where text or other content is included by reference and automatically rendered in place. Hypertext can be used to support very complex and dynamic systems of linking and cross-referencing. The most famous implementation of hypertext is the World Wide Web, written in the final months of 1990 and released on the Internet in 1991. History In 1941, Jorge Luis Borges published "The Garden of Forking Paths", a short story that is often considered an inspiration for the concept of hypertext. In 1945, Vannevar Bush wrote an article in The Atlantic Monthly called "As We May Think", about a futuristic proto-hypertext device he called a Memex. 
A Memex would hypothetically store — and record — content on reels of microfilm, using electric photocells to read coded symbols recorded next to individual microfilm frames while the reels spun at high speed, stopping on command. The coded symbols would enable the Memex to index, search, and link content to create and follow associative trails. Because the Memex was never implemented and could only link content in a relatively crude fashion — by creating chains of entire microfilm frames — the Memex is regarded only as a proto-hypertext device, but it is fundamental to the history of hypertext because it directly inspired the invention of hypertext by Ted Nelson and Douglas Engelbart. In 1965, Ted Nelson coined the terms "hypertext" and "hypermedia" as part of a model he developed for creating and using linked content (first published reference 1965). He later worked with Andries van Dam to develop the Hypertext Editing System (for text editing) in 1967 at Brown University. It was implemented on an IBM 2250 terminal with a light pen provided as a pointing device. By 1976, its successor FRESS was used in a poetry class in which students could browse a hyperlinked set of poems and discussion by experts, faculty and other students, in what was arguably the world's first online scholarly community, which van Dam says "foreshadowed wikis, blogs and communal documents of all kinds". Ted Nelson said that in the 1960s he began implementing the hypertext system he had theorized, named Project Xanadu, but his first, incomplete public release appeared much later, in 1998. During this period, Nelson also proposed using Vladimir Nabokov's 1962 novel Pale Fire as part of a demonstration to IBM, intending to show how hypertext could support complex, non-linear forms of literary analysis. The novel, structured as a long poem with an extensive, self-referential commentary and index, embodied the principles of associative linking and user-directed navigation that Nelson believed defined hypertext. Its layered design enabled readers to follow multiple interpretive paths through the text, resembling the branching structures later implemented in digital hypertext systems. However, IBM chose a more technically conventional presentation, and the literary demonstration was never realized. Douglas Engelbart independently began working on his NLS system in 1962 at Stanford Research Institute, although delays in obtaining funding, personnel, and equipment meant that its key features were not completed until 1968. In December of that year, Engelbart demonstrated a "hypertext" (meaning editing) interface to the public for the first time, in what has come to be known as "The Mother of All Demos". In 1971, a system called Scrapbook, produced by David Yates and his team at the UK's National Physical Laboratory, went live. It was an information storage and retrieval system that included what would now be called word processing, e-mail and hypertext. ZOG, an early hypertext system, was developed at Carnegie Mellon University during the 1970s; it was used for documents on Nimitz-class aircraft carriers and later evolved into KMS (Knowledge Management System). The first hypermedia application is generally considered to be the Aspen Movie Map, implemented in 1978. The Movie Map allowed users to arbitrarily choose which way they wished to drive in a virtual cityscape, in two seasons (from actual photographs) as well as 3-D polygons.
In France, the launch of the Minitel system in 1982 provided widespread public access to interactive digital content via telephone lines and videotex terminals. Minitel allowed users to search directories, make purchases, read news, and access databases using a system of on-screen menus and numbered links. Although it was based on videotex rather than the dynamic linking protocols of later hypertext systems, Minitel introduced many users to the practice of navigating non-linear networks of information. Its use of branching menus and user-selected paths anticipated key aspects of hypertext interaction, particularly the idea of browsing through interconnected data by following associative or logical links. As one of the earliest large-scale deployments of an online information service, Minitel helped familiarize the public with interactive computing and laid the cultural groundwork for the broader adoption of hypertext and web technologies in the 1990s. Between 1984 and 1987, Frank Halasz, Randall Trigg, and Thomas Moran developed NoteCards at Xerox PARC. This early hypertext system was designed to support information analysis and idea processing, employing a central metaphor of "notecards", which operated as discrete units of information that could contain text or graphics. These notecards could be interconnected through typed, directional links, enabling users to create semantically distinct relationships. A key component of NoteCards was the "Browser card", which provided a graphical overview of the structure of linked notecards, facilitating navigation within complex information networks. Operating on Xerox Lisp machines, NoteCards had its primary impact within the research community rather than as a commercial product. Its most significant contribution to the field of hypertext is often attributed to the insights gained from its use: Halasz identified critical challenges such as search and query in large hypertexts, composite structures, versioning, and collaborative work. In 1980, Tim Berners-Lee created ENQUIRE, an early hypertext database system somewhat like a wiki but without hypertext punctuation, which was not invented until 1987. The early 1980s also saw a number of experimental "hyperediting" functions in word processors and hypermedia programs, many of whose features and terminology anticipated those of the World Wide Web. Guide, the first significant hypertext system for personal computers, was developed by Peter J. Brown at the University of Kent in 1982. In 1980, Roberto Busa, an Italian Jesuit priest and one of the pioneers in the use of computers for linguistic and literary analysis, published the Index Thomisticus as a tool for performing text searches within the massive corpus of Aquinas's works. Sponsored by the founder of IBM, Thomas J. Watson, the project lasted about 30 years (1949–1980) and eventually produced the 56 printed volumes of the Index Thomisticus, the first important hypertext work covering the books of Saint Thomas Aquinas and a few related authors. In 1983, Ben Shneiderman at the University of Maryland Human-Computer Interaction Lab led a group that developed the HyperTies system, which was commercialized by Cognetics Corporation. They studied many designs before adopting the blue color for links. HyperTies was used to create the July 1988 issue of the Communications of the ACM as a hypertext document, and then the first commercial electronic book, Hypertext Hands-On!. In 1985, Grolier's Academic American Encyclopedia on CD-ROM was developed by Activenture.
The encyclopedia could be navigated through hypertext links, a full text search engine, and a traditional bookshelf interface. In August 1987, Apple Computer released HyperCard for the Macintosh line at the MacWorld convention. Its impact, combined with interest in Peter J. Brown's GUIDE (marketed by OWL and released earlier that year) and Brown University's Intermedia, led to broad interest in and enthusiasm for hypertext, hypermedia, databases, and new media in general. The first ACM Hypertext academic conference took place in November 1987, in Chapel Hill, North Carolina, where many other applications, including the branched literature writing software Storyspace, were also demonstrated. Meanwhile, Nelson (who had been working on and advocating his Xanadu system for over two decades) convinced Autodesk to invest in his revolutionary ideas. The project continued at Autodesk for four years, but no product was released. In 1989, Tim Berners-Lee, then a scientist at CERN, proposed and later prototyped a new hypertext project in response to a request for a simple, immediate, information-sharing facility, to be used among physicists working at CERN and other academic institutions. He called the project "WorldWideWeb". HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will. Potentially, HyperText provides a single user-interface to many large classes of stored information, such as reports, notes, data-bases, computer documentation and on-line systems help. We propose the implementation of a simple scheme to incorporate several different servers of machine-stored information already available at CERN, including an analysis of the requirements for information access needs by experiments... A program which provides access to the hypertext world we call a browser. ― T. Berners-Lee, R. Cailliau, 12 November 1990, CERN In 1992, Lynx was released as an early Internet web browser. Its ability to provide hypertext links within documents that could reach documents anywhere on the Internet began the creation of the Web. As new web browsers were released, traffic on the World Wide Web quickly exploded from only 500 known web servers in 1993 to over 10,000 in 1994. As a result, all previous hypertext systems were overshadowed by the success of the Web, even though it lacked many features of those earlier systems, such as integrated browsers/editors (a feature of the original WorldWideWeb browser, which was not carried over into most of the other early Web browsers). Implementations Besides the already mentioned Project Xanadu, Hypertext Editing System, NLS, HyperCard, and World Wide Web, there are other noteworthy early implementations of hypertext, with different feature sets: Academic conferences Among the top academic conferences for new research in hypertext is the annual ACM Conference on Hypertext and Social Media. The Electronic Literature Organization hosts annual conferences discussing hypertext fiction, poetry and other forms of electronic literature. Although not exclusively about hypertext, the World Wide Web series of conferences, organized by IW3C2, also includes many papers of interest. There is a list on the Web with links to all conferences in the series. Hypertext fiction Hypertext writing has developed its own style of fiction, coinciding with the growth and proliferation of hypertext development software and the emergence of electronic networks.
Hypertext fiction is one of the earliest genres of electronic literature, or literary works that are designed to be read in digital media. Two software programs specifically designed for literary hypertext, Storyspace and Intermedia, became available in the 1990s. Judy Malloy's Uncle Roger (1986) and Michael Joyce's afternoon, a story (1987) are generally considered the first works of hypertext fiction. An advantage of writing a narrative using hypertext technology is that the meaning of the story can be conveyed through a sense of spatiality and perspective that is arguably unique to digitally networked environments. An author's creative use of nodes, the self-contained units of meaning in a hypertextual narrative, can play with the reader's orientation and add meaning to the text. One of the most successful computer games, Myst, was first written in HyperCard. The game was constructed as a series of Ages, each Age consisting of a separate HyperCard stack. The full stack of the game consists of over 2,500 cards. In some ways, Myst redefined interactive fiction, using puzzles and exploration as a replacement for hypertextual narrative. Critics of hypertext claim that it inhibits the old, linear reader experience by creating several different tracks to read on. This can also be seen as contributing to a postmodernist fragmentation of worlds. In some cases, hypertext may be detrimental to the development of appealing stories (as in the case of hypertext gamebooks), where ease of linking fragments may lead to non-cohesive or incomprehensible narratives. However, such critics do see value in hypertext's ability to present several different views on the same subject in a simple way. This echoes the arguments of 'medium theorists' like Marshall McLuhan who look at the social and psychological impacts of the media. New media can become so dominant in public culture that they effectively create a "paradigm shift" as people shift their perceptions, understanding of the world, and ways of interacting with the world and each other in relation to new technologies and media. So hypertext signifies a change from linear, structured and hierarchical forms of representing and understanding the world into fractured, decentralized and changeable media based on the technological concept of hypertext links. In the 1990s, women and feminist artists took advantage of hypertext and produced dozens of works. Linda Dement's Cyberflesh Girlmonster is a hypertext CD-ROM that incorporates images of women's body parts and remixes them to create new monstrous yet beautiful shapes. Caitlin Fisher's award-winning online hypertext novella These Waves of Girls (2001) is set in three time periods of the protagonist's life, exploring polymorphous perversity enacted in her queer identity through memory. The story is written as a reflection diary of the interconnected memories of childhood, adolescence, and adulthood. It consists of an associated multi-modal collection of nodes that includes linked text, still and moving images, manipulable images, animations, and sound clips. Adrienne Eisen (pen name of Penelope Trunk) wrote hypertexts that were subversive narrative journeys into the mind of a woman whose erotic encounters were charged with a post-feminist satirical edge that cut deep into the American psyche. There are various forms of hypertext fiction, each of which is structured differently. See also References Documentary film Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index] | [TOKENS: 779] |
Contents List of countries by Human Development Index The HDI is the most widely used indicator of human development and has changed how people view the concept. However, several aspects of the index have received criticism. Some scholars have criticized how the factors are weighed, in particular how an additional year of life expectancy is valued differently between countries; and the limited factors it considers, noting the omission of factors such as the levels of distributional and gender inequality. In response to the former, the UNDP introduced the inequality-adjusted Human Development Index (IHDI) in its 2010 report, and in response to the latter the Gender Development Index (GDI) was introduced in the 1995 report. Others have criticized the perceived oversimplification of using a single number per country. To reflect developmental differences within countries, a subnational HDI (SHDI) featuring data for more than 1,600 regions was introduced in 2018 by the Global Data Lab at Radboud University in the Netherlands. In 2020, the UNDP introduced another index, the planetary pressures–adjusted Human Development Index (PHDI), which decreases the scores of countries with a higher ecological footprint. Dimensions and indicators The HDI was first published in 1990 with the goal of being a more comprehensive measure of human development than purely economic measures such as gross domestic product. The index incorporates three dimensions of human development: a long and healthy life, knowledge, and decent living standards. Various indicators are used to quantify how countries perform on each dimension. The indicators used in the 2022 report were life expectancy at birth; expected years of schooling for children; mean years of schooling for adults; and gross national income per capita. The indicators are used to create a health index, an education index and an income index, each with a value between 0 and 1. The geometric mean of the three indices—that is, the cube root of the product of the indices—is the human development index. A value above 0.800 is classified as very high, between 0.700 and 0.799 as high, 0.550 to 0.699 as medium, and below 0.550 as low (a worked sketch of this calculation appears at the end of this article). The data used to calculate HDI comes mostly from United Nations agencies and international institutions, such as United Nations Educational, Scientific and Cultural Organization (UNESCO), United Nations Department of Economic and Social Affairs, the World Bank, International Monetary Fund and Organisation for Economic Co-operation and Development (OECD). Rarely, when one of the indicators is missing, cross-country regression models are used. Due to improved data and methodology updates, HDI values are not comparable across human development reports; instead, each report recalculates the HDI for some previous years. List The Human Development Report includes data for all 193 member states of the United Nations, as well as Hong Kong SAR and Palestine. However, the Human Development Index is not calculated for two UN member states, Monaco and North Korea; only some components of the index are calculated for these two countries. The Cook Islands, the Holy See (Vatican City), and Niue are the only three state parties within the United Nations System which are not included in the report. In total, the HDI is available for 192 countries and one territory. Data is for the year 2023. Regions and groups The Human Development Report also reports the HDI for various groups of countries.
These include regional groupings based on the UNDP regional classifications, HDI groups including the countries currently falling into a given HDI bracket, OECD members and various other UN groupings. The aggregate HDI values are calculated in the same way as for individual countries with the input data being the weighted average for all countries with available data in the group. See also Notes References External links |
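The aggregation described above is easy to make concrete. The following minimal Python sketch computes the geometric mean and applies the classification brackets given earlier; the dimension-index values are invented for illustration, and the UNDP's goalpost normalization that produces such indices is not reproduced here:

def hdi(health_index, education_index, income_index):
    # The HDI is the geometric mean, i.e. the cube root of the product,
    # of the three dimension indices, each already scaled to 0..1.
    return (health_index * education_index * income_index) ** (1 / 3)

def classify(value):
    # Classification brackets as given in the Human Development Report.
    if value >= 0.800:
        return "very high"
    if value >= 0.700:
        return "high"
    if value >= 0.550:
        return "medium"
    return "low"

value = hdi(0.92, 0.85, 0.80)  # illustrative indices, not from any report
print(round(value, 3), classify(value))  # prints: 0.855 very high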
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/CLIST] | [TOKENS: 336] |
Contents CLIST CLIST (Command List; pronounced "C-List") is a procedural programming language for Time Sharing Option (TSO) in SVS and MVS systems. It originated in OS/360 Release 20 and has assumed a secondary role since the availability of Rexx in TSO/E Version 2. In its basic form, a CLIST program is a list of commands to be executed in strict sequence (like a DOS batch (*.bat) file). OS/VS2 R3.6 (MVS) added If-Then-Else logic and loop constructs to CLIST. The term CLIST is also used for command lists written by users of NetView. CLIST is an interpreted language. That is, the computer must translate a CLIST every time the program is executed. CLISTs therefore tend to be slower than programs written in compiled languages such as COBOL, FORTRAN, or PL/I. (A program written in a compiled language is translated once to create a "load module" or executable.) CLIST can read/write MVS files and read/write from/to a TSO terminal. It can read parameters from the caller and also features a function to hold global variables and pass them between CLISTs. A CLIST can also call an MVS application program (written in COBOL or PL/I, for example). CLISTs can be run in background[ii][iii]. CLISTs can display TSO I/O screens and menus by using ISPF dialog services. Compare the function of CLIST with that provided by Rexx. Example programs: a sketch adding If-Then-Else logic appears below. Footnotes References |
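The If-Then-Else construct mentioned above can be sketched as follows; this is an illustrative fragment rather than the article's original listing, and the variable name and messages are invented:

PROC 0
/* Illustrative CLIST: set a symbolic variable and branch on its value */
SET &COUNT = 3
IF &COUNT GT 0 THEN +
  WRITE THE COUNT IS &COUNT
ELSE +
  WRITE NOTHING TO PROCESS

Here PROC 0 declares a procedure taking no positional parameters, SET assigns a value to the symbolic variable &COUNT, WRITE displays a line at the TSO terminal, and the trailing + continues a statement onto the next line.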
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Georg_Simmel] | [TOKENS: 3978] |
Contents Georg Simmel Georg Simmel (/ˈzɪməl/; German: [ˈzɪml̩]; 1 March 1858 – 26 September 1918) was a German sociologist, philosopher, and critic. A founding figure of sociology, his neo-Kantian approach helped establish sociological antipositivism, asking "What is society?" in analogy to Kant's "What is nature?". He pioneered analyses of individuality and social fragmentation. Simmel discussed social and cultural phenomena in terms of "forms" and "contents" with a transient relationship, wherein form becomes content, and vice versa, depending on context. In this sense, Simmel was a forerunner to structuralist styles of reasoning in the social sciences. Through "The Metropolis and Mental Life", Simmel was a precursor of urban sociology, symbolic interactionism, and social network analysis. An acquaintance of Max Weber, Simmel wrote on the topic of personal character in a manner reminiscent of the sociological ideal type. He broadly rejected academic standards, however, philosophically covering topics such as emotion and romantic love. Both Simmel's and Weber's nonpositivist theory informed the eclectic critical theory of the Frankfurt School. Biography Georg Simmel was born in Berlin, Germany, as the youngest of seven children of an assimilated Jewish family. His father, Eduard Simmel (1810–1874), a prosperous businessman and convert to Roman Catholicism, had founded a confectionery store called "Felix & Sarotti" that would later be taken over by a chocolate manufacturer. His mother Flora Bodstein (1818–1897) came from a Jewish family that had converted to Lutheranism. Georg himself was baptized as a Protestant when he was a child. His father died in 1874, when Georg was 16, leaving a sizable inheritance. Georg was then adopted by Julius Friedländer, the founder of an international music publishing house known as Peters Verlag, who endowed him with the large fortune that enabled him to become a scholar. Beginning in 1876, Simmel studied philosophy and history at the Humboldt University of Berlin, going on to receive his doctorate in 1881 for his thesis on Kantian philosophy of matter, titled "Das Wesen der Materie nach Kants Physischer Monadologie" ("The Nature of Matter According to Kant's Physical Monadology"). In 1885, Simmel became a privatdozent at the University of Berlin, officially lecturing in philosophy but also in ethics, logic, pessimism, art, psychology and sociology. His lectures were not only popular inside the university, but attracted the intellectual elite of Berlin as well. Although his applications for vacant chairs at German universities were supported by Max Weber, Simmel remained an academic outsider. However, with the support of an inheritance from his guardian, he was able to pursue his scholarly interests for many years without needing a salaried position. Simmel had a hard time gaining acceptance in the academic community despite the support of well-known associates such as Max Weber, Rainer Maria Rilke, Stefan George and Edmund Husserl. This was partly because he was seen as a Jew during an era of anti-Semitism, but also simply because his articles were written for a general audience rather than academic sociologists. This led to dismissive judgements from other professionals. Simmel nevertheless continued his intellectual and academic work, as well as taking part in artistic circles. In 1890, Georg married Gertrud Kinel, a philosopher who published under the pseudonym Marie-Luise Enckendorf, and under her own name.
They lived a sheltered and bourgeois life, their home becoming a venue for cultivated gatherings in the tradition of the salon. They had one son, Hans Eugen Simmel, who became a medical doctor. Georg and Gertrud's granddaughter was the psychologist Marianne Simmel. Simmel also had a secret affair with his assistant Gertrud Kantorowicz, who bore him a daughter in 1907, though this fact was hidden until after Simmel's death. In 1909, Simmel, Ferdinand Tönnies, Max Weber, and others co-founded the German Society for Sociology. He served as a member of its first executive body. In 1914, Simmel received an ordinary professorship with chair at the then-German University of Strassburg, but did not feel at home there. When World War I broke out, all academic activities and lectures were halted and lecture halls were converted to military hospitals. In 1915 he applied – without success – for a chair at Heidelberg University. In 1917, Simmel stopped reading the newspapers and withdrew to the Black Forest to finish the book The View of Life (Lebensanschauung). Shortly before the end of the war in 1918, he died from liver cancer in Strasbourg. Theory There are four basic levels of concern in Simmel's work. A dialectical approach is a multicausal and multidirectional method: it focuses on social relations; integrates facts and value, rejecting the idea that there are hard and fast dividing lines between social phenomena; looks not only at the present, but also at the past and future; and is deeply concerned with both conflicts and contradictions. Simmel's sociology was concerned with relationships—especially interaction—and he was thus known as a methodological relationalist. This approach is based on the idea that interactions exist between everything. Overall, Simmel was mostly interested in dualisms, conflicts, and contradictions in whatever realm of the social world he happened to be working on. The furthest Simmel brought his work toward a micro-level of analysis was in dealing with the forms of interaction that take place among different types of people. Such forms would include subordination, superordination, exchange, conflict and sociability.: 158–88 Simmel focused on these forms of association while paying little attention to individual consciousness. Simmel believed in the creative consciousness that can be found in diverse forms of interaction, in which he observed both the ability of actors to create social structures and the disastrous effects such structures had on the creativity of individuals. Simmel also believed that social and cultural structures come to have a life of their own.
Simmel refers to "all the forms of association by which a mere sum of separate individuals are made into a 'society'," whereby society is defined as a "higher unity" composed of individuals.: 157 Simmel was especially fascinated by man's "impulse to sociability," whereby "the solitariness of the individuals is resolved into togetherness," referring to this unity as "the free-playing, interacting interdependence of individuals.": 157–8 Accordingly, he defines sociability as "the play-form of association" driven by "amicability, breeding, cordiality and attractiveness of all kinds.": 158 In order for this free association to occur, Simmel explains, "the personalities must not emphasize themselves too individually...with too much abandon and aggressiveness.": 158 Rather, "this world of sociability...a democracy of equals" is to be without friction so long as people blend together in the spirit of pleasure, bringing "about among themselves a pure interaction free of any disturbing material accent.": 159 Simmel describes idealised interactions in expressing that "the vitality of real individuals, in their sensitivities and attractions, in the fullness of their impulses and convictions...is but a symbol of life, as it shows itself in the flow of a lightly amusing play," adding that "a symbolic play, in whose aesthetic charm all the finest and most highly sublimated dynamics of social existence and its riches are gathered.": 162–3 In a dyad (i.e. a two-person group), a person is able to retain their individuality, as there is no fear that another may shift the balance of the group. In contrast, triads (i.e. three-person groups) risk the potential of one member becoming subordinate to the other two, thus threatening their individuality. Furthermore, were a triad to lose a member, it would become a dyad. The basic nature of this dyad-triad principle forms the essence of structures that form society. As a group (structure) increases in size, it becomes more isolated and segmented, whereby the individual also becomes further separated from each member. With respect to the notion of "group size", Simmel's view was somewhat ambiguous. On one hand, he believed that the individual benefits most when a group gets bigger, as exerting control on the individual becomes harder. On the other hand, with a large group there is a possibility of the individual becoming distant and impersonal. Therefore, to cope with the larger group, the individual must become part of a smaller group, such as the family. The value of something is determined by its distance from the actor. In "The Stranger", Simmel discusses how if a person is too close to the actor, they are not considered a stranger. If they are too far, however, they would no longer be a part of a group. The particular distance from a group allows a person to have objective relationships with different group members.
As a result, when the lectures were published as essays in a book, the series editor himself had to supply an essay on the original topic to fill the gap. "The Metropolis and Mental Life" was not particularly well received during Simmel's lifetime. The organisers of the exhibition overemphasised its negative comments about city life, even though Simmel had also pointed out positive transformations. During the 1920s the essay was influential on the thinking of Robert E. Park and other American sociologists at the University of Chicago who collectively became known as the "Chicago School". It gained wider circulation in the 1950s when it was translated into English and published as part of Kurt Wolff's edited collection, The Sociology of Georg Simmel. It now appears regularly on the reading lists of courses in urban studies and architecture history. However, it is important to note that the notion of the blasé is actually not the central or final point of the essay, but is part of a description of a sequence of states in an irreversible transformation of the mind. In other words, Simmel does not quite say that the big city has an overall negative effect on the mind or the self, even as he suggests that it undergoes permanent changes. It is perhaps this ambiguity that gave the essay a lasting place in the discourse on the metropolis. The deepest problems of modern life flow from the attempt of the individual to maintain the independence and individuality of his existence against the sovereign powers of society, against the weight of the historical heritage and the external culture and technique of life. The antagonism represents the most modern form of the conflict which primitive man must carry on with nature for his own bodily existence. The eighteenth century may have called for liberation from all the ties which grew up historically in politics, in religion, in morality and in economics in order to permit the original natural virtue of man, which is equal in everyone, to develop without inhibition; the nineteenth century may have sought to promote, in addition to man's freedom, his individuality (which is connected with the division of labor) and his achievements which make him unique and indispensable but which at the same time make him so much the more dependent on the complementary activity of others; Nietzsche may have seen the relentless struggle of the individual as the prerequisite for his full development, while socialism found the same thing in the suppression of all competition – but in each of these the same fundamental motive was at work, namely the resistance of the individual to being levelled, swallowed up in the social-technological mechanism. — Georg Simmel, The Metropolis and Mental Life (1903) In The Philosophy of Money, Simmel views money as a component of life that helps us understand the totality of life. Simmel believed people created value by making objects, then separating themselves from those objects, and then trying to overcome that distance. He found that things which were too close were not considered valuable, and things which were too far for people to get were also not considered valuable. Considered in determining value were the scarcity, time, sacrifice, and difficulty involved in getting the object. For Simmel, city life led to a division of labor and increased financialisation. As financial transactions increase, some emphasis shifts to what the individual can do, instead of who the individual is.
Financial matters in addition to emotions are in play. Simmel's concept of distance comes into play where he identifies a stranger as a person that is far away and close at the same time. The Stranger is close to us, insofar as we feel between him and ourselves common features of a national, social, occupational, or generally human, nature. He is far from us, insofar as these common features extend beyond him or us, and connect us only because they connect a great many people. — Georg Simmel, "The Stranger" (1908) A stranger is far enough away that he is unknown but close enough that it is possible to get to know him. In a society there must be a stranger. If everyone is known then there is no person that is able to bring something new to everybody. The stranger bears a certain objectivity that makes him a valuable member to the individual and society. People let down their inhibitions around him and confess openly without any fear. This is because there is a belief that the Stranger is not connected to anyone significant and therefore does not pose a threat to the confessor's life. More generally, Simmel observes that because of their peculiar position in the group, strangers often carry out special tasks that the other members of the group are either incapable or unwilling to carry out. For example, especially in pre-modern societies, most strangers made a living from trade, which was often viewed as an unpleasant activity by "native" members of those societies. In some societies, they were also employed as arbitrators and judges, because they were expected to treat rival factions in society with an impartial attitude. Objectivity may also be defined as freedom: the objective individual is bound by no commitments which could prejudice his perception, understanding, and evaluation of the given. — Georg Simmel, "The Stranger" (1908) On one hand the stranger's opinion does not really matter because of his lack of connection to society, but on the other the stranger's opinion does matter, because of his lack of connection to society. He holds a certain objectivity that allows him to be unbiased and decide freely without fear. He is simply able to see, think, and decide without being influenced by the opinion of others. According to Simmel, in small groups, secrets are less needed because everyone seems to be more similar. In larger groups secrets are needed as a result of their heterogeneity. In secret societies, groups are held together by the need to maintain the secret, a condition that also causes tension because the society relies on its sense of secrecy and exclusion. For Simmel, secrecy exists even in relationships as intimate as marriage. In revealing all, marriage becomes dull and boring and loses all excitement. Simmel saw a general thread in the importance of secrets and the strategic use of ignorance: to be social beings who are able to cope successfully with their social environment, people need clearly defined realms of unknowns for themselves. Furthermore, sharing a common secret produces a strong "we feeling." The modern world depends on honesty, and therefore a lie can be considered more devastating than it ever has been before. Money allows a level of secrecy that was never attainable before, because money allows for "invisible" transactions, now that money is an integral part of human values and beliefs. It is possible to buy silence.
In his multi-layered essay "Women, Sexuality & Love", published in 1923, Simmel discusses flirtation as a generalized type of social interaction. According to Simmel, "to define flirtation as simply a 'passion for pleasing' is to confuse the means to an end with the desire for this end." The distinctiveness of the flirt lies in the fact that she awakens delight and desire by means of a unique antithesis and synthesis: through the alternation of accommodation and denial. In the behavior of the flirt, the man feels the proximity and interpenetration of the ability and inability to acquire something. This is in essence the "price." A sidelong glance with the head half-turned is characteristic of flirtation in its most banal guise. In the eyes of Simmel, fashion is a form of social relationship that allows those who wish to conform to the demands of a group to do so. It also allows some to be individualistic by deviating from the norm. There are many social roles in fashion, and both objective culture and individual culture can have an influence on people. In the initial stage everyone adopts what is fashionable, and those that deviate from the fashion inevitably adopt a whole new view of what they consider fashion. Ritzer wrote (p. 163): Simmel argued that not only does following what is in fashion involve dualities, so does the effort on the part of some people to be out of fashion. Unfashionable people view those who follow a fashion as being imitators and themselves as mavericks, but Simmel argued that the latter are simply engaging in an inverse form of imitation. — George Ritzer, "Georg Simmel", Modern Sociological Theory (2008) This means that those who are trying to be different or "unique" are not, because in trying to be different they become part of a new group that has labeled itself different or "unique". Aesthetics and cultural history Simmel's work on European art history spans the Middle Ages to the early 20th century, including books on Goethe and Rembrandt and essays on artists from Michelangelo and Leonardo da Vinci to Auguste Rodin. He also wrote on art forms and media, aesthetics after Kant and Schopenhauer, and the sociology of art in modern culture. These are important to Simmel's view of society's modern forms. He used symbols, metaphors, tropes, and analogies in exploring structures of mind and behavior. From 1901 he planned an unrealized volume integrating his philosophy of art, which he told philosopher Heinrich Rickert was a "burning" and "dominant interest". He asked Kantorowicz to edit his unpublished manuscripts posthumously. Suhrkamp Verlag issued many unknown texts in a 1989–2014 Gesamtausgabe (collected works). Works Simmel's major monographic works include, in chronological order: See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-215] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest maintain their voxel position even when unsupported in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation or manually specified by the player (a toy sketch of seeded generation appears below, after the descriptions of the game modes). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text and its only text of significant length directed at the player.: 10–12 At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage, nor are they affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
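The seed mechanic described earlier can be illustrated with a toy Python sketch. This is not Mojang's actual terrain algorithm; the block names, chunk size, and seeding scheme are all invented for the example. The point is only that deriving the generator's randomness from the world seed and the chunk coordinates makes the same seed reproduce the same terrain:

import random

def generate_chunk(world_seed, chunk_x, chunk_z, size=4):
    # Derive a per-chunk random stream from the world seed and the chunk
    # coordinates, so revisiting the same coordinates always reproduces
    # the same terrain for a given seed.
    rng = random.Random(hash((world_seed, chunk_x, chunk_z)))
    return [[rng.choice(["dirt", "stone", "water"]) for _ in range(size)]
            for _ in range(size)]

# The same seed and coordinates always yield the identical chunk...
assert generate_chunk(42, 0, 0) == generate_chunk(42, 0, 0)
# ...while a different seed almost certainly yields different terrain.
print(generate_chunk(42, 0, 0))
print(generate_chunk(7, 0, 0))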
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was announced for Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
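Resource packs like those just mentioned, as well as the data packs described below, are distributed as folders whose root contains a pack.mcmeta descriptor in JSON. The following sketch shows the general shape of that file; the description text is invented, and the pack_format number varies by game version, so the value shown is illustrative only:

{
  "pack": {
    "pack_format": 15,
    "description": "An illustrative example pack"
  }
}

The game reads this descriptor to check whether a pack was built for a compatible version before offering it in the pack-selection menu.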
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and, by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though this apparent acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and was renamed Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Chromebooks compatible with the Google Play Store. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A dedicated Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Speaking about learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hissing of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient pieces. To compose the music of Minecraft, Rosenfeld used Ableton Live along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the minigames in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that the record was by then longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not been released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they made the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had sold 21 million copies. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs could still come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in the first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering in Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project, in Kibera, one of Nairobi's informal settlements, was in the planning phase at the time. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without requiring training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
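As an illustration of the kind of exercise such programming initiatives use, the sketch below drives a world through the community mcpi Python library (originally written for Minecraft: Pi Edition and also usable with compatible server mods). The article does not name a specific tool, and the server address, port, and coordinates here are assumptions.

```python
# Minimal teaching sketch using the community "mcpi" library.
# (Connection details and coordinates are illustrative assumptions.)
from mcpi.minecraft import Minecraft
from mcpi import block

# Connect to a local world exposing the mcpi API (4711 is the usual port).
mc = Minecraft.create("localhost", 4711)

# Read the player's tile position and build a small stone tower beside it.
pos = mc.player.getTilePos()
for height in range(5):
    mc.setBlock(pos.x + 2, pos.y + height, pos.z, block.STONE.id)

mc.postToChat("Tower built!")  # message shown in the in-game chat
```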
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). In the end, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson canceled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA claim was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/CatBoost] | [TOKENS: 452] |
CatBoost CatBoost is an open-source software library developed by Yandex. It provides a gradient boosting framework which, among other features, attempts to solve for categorical features using a permutation-driven alternative to the classical algorithm. It works on Linux, Windows, and macOS, and is available in Python and R; models built using CatBoost can be used for predictions in C++, Java, C#, Rust, Core ML, ONNX, and PMML. The source code is licensed under the Apache License and available on GitHub. InfoWorld magazine included the library in its "The best machine learning tools" awards in 2017, along with TensorFlow, PyTorch, XGBoost, and eight other libraries. Kaggle listed CatBoost as one of the most frequently used machine learning (ML) frameworks in the world: it ranked among the top 8 most frequently used ML frameworks in the 2020 survey and among the top 7 in the 2021 survey. As of April 2022, CatBoost is installed about 100,000 times per day from the PyPI repository. Features CatBoost has gained popularity compared to other gradient boosting algorithms primarily due to features such as native handling of categorical and text data, support for GPU training, and model analysis and visualization tools. History In 2009 Andrey Gulin developed MatrixNet, a proprietary gradient boosting library that was used at Yandex to rank search results. Since 2009 MatrixNet has been used in different projects at Yandex, including recommendation systems and weather prediction. In 2014–2015 Andrey Gulin worked with a team of researchers to start a new project called Tensornet, which was aimed at solving the problem of how to work with categorical data. Their work resulted in several proprietary gradient boosting libraries with different approaches to handling categorical data. In 2016 the Machine Learning Infrastructure team led by Anna Dorogush started working on gradient boosting at Yandex, including Matrixnet and Tensornet. They implemented and open-sourced the next version of the gradient boosting library, called CatBoost, which has support for categorical and text data, GPU training, model analysis, and visualization tools. CatBoost was open-sourced in July 2017 and is under active development at Yandex and in the open-source community.
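As a usage illustration of the categorical-feature handling described above, the sketch below passes a categorical column to the Python API as raw strings, with no manual encoding beforehand; the toy data and parameter values are assumptions, not details from the article.

```python
# Minimal sketch of CatBoost's native categorical-feature handling.
# (The toy dataset and parameter values are illustrative.)
from catboost import CatBoostClassifier, Pool

# One categorical column ("city") is passed as raw strings; CatBoost
# encodes it internally using its permutation-driven target statistics.
X = [["London", 25], ["Paris", 31], ["London", 42], ["Berlin", 19]]
y = [1, 0, 1, 0]
train_pool = Pool(data=X, label=y, cat_features=[0])  # column 0 is categorical

model = CatBoostClassifier(iterations=50, depth=3, verbose=False)
model.fit(train_pool)

print(model.predict([["Paris", 28]]))
model.save_model("model.cbm")  # native format; ONNX/PMML export is also supported
```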
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Birthday#cite_note-9] | [TOKENS: 4101] |
Birthday A birthday is the anniversary of the birth of a person or the figurative birth of an institution. Birthdays of people are celebrated in numerous cultures, often with birthday gifts, birthday cards, a birthday party, or a rite of passage. Many religions celebrate the birth of their founders or religious figures with special holidays (e.g. Christmas, Mawlid, Buddha's Birthday, Krishna Janmashtami, and Gurpurb). There is a distinction between birthday and birthdate (also known as date of birth): the former, except for February 29, occurs each year (e.g. January 15), while the latter is the complete date when a person was born (e.g. January 15, 2001). Coming of age In most legal systems, one becomes a legal adult on a particular birthday when they reach the age of majority (usually between 12 and 21), and reaching age-specific milestones confers particular rights and responsibilities. At certain ages, one may become eligible to leave full-time education, become subject to military conscription or to enlist in the military, to consent to sexual intercourse, to marry with parental consent, to marry without parental consent, to vote, to run for elected office, to legally purchase (or consume) alcohol and tobacco products, to purchase lottery tickets, or to obtain a driver's licence. The age of majority is when minors cease to legally be considered children and assume control over their persons, actions, and decisions, thereby terminating the legal control and responsibilities of their parents or guardians over and for them. Most countries set the age of majority at 18, though it varies by jurisdiction. Many cultures celebrate a coming-of-age birthday when a person reaches a particular year of life. Some cultures celebrate landmark birthdays in early life or old age. In many cultures and jurisdictions, if a person's real birthday is unknown (for example, if they are an orphan), their birthday may be adopted or assigned to a specific day of the year, such as January 1. Racehorses are reckoned to become one year old in the year following their birth on January 1 in the Northern Hemisphere and August 1 in the Southern Hemisphere. Birthday parties In certain parts of the world, an individual's birthday is celebrated by a party featuring a specially made cake. Presents are bestowed on the individual by the guests appropriate to their age. Other birthday activities may include entertainment (sometimes by a hired professional, e.g., a clown, magician, or musician) and a special toast or speech by the birthday celebrant. The last stanza of Patty Hill's and Mildred Hill's famous song "Good Morning to You" (unofficially titled "Happy Birthday to You") is typically sung by the guests at some point in the proceedings. In some countries, a piñata takes the place of a cake. The birthday cake may be decorated with lettering and the person's age, or studded with the same number of lit candles as the age of the individual. The celebrated individual may make a silent wish and attempt to blow out the candles in one breath; if successful, superstition holds that the wish will be granted. In many cultures, the wish must be kept secret or it will not "come true". Birthdays as holidays Historically significant people's birthdays, such as national heroes or founders, are often commemorated by an official holiday marking the anniversary of their birth.
Some notables, particularly monarchs, have an official birthday on a fixed day of the year, which may not necessarily match the day of their birth, but on which celebrations are held. In Mahayana Buddhism, many monasteries celebrate the anniversary of Buddha's birth, usually in a highly formal, ritualized manner. They treat Buddha's statue as if it were Buddha himself, as if he were alive, bathing and "feeding" him. Jesus Christ's traditional birthday is celebrated as Christmas Eve or Christmas Day around the world, on December 24 or 25, respectively. As some Eastern churches use the Julian calendar, their December 25 falls on January 7 in the Gregorian calendar. These dates are traditional and have no connection with Jesus's actual birthday, which is not recorded in the Gospels. Similarly, the birthdays of the Virgin Mary and John the Baptist are liturgically celebrated on September 8 and June 24, especially in the Roman Catholic and Eastern Orthodox traditions (although for those Eastern Orthodox churches using the Julian calendar the corresponding Gregorian dates are September 21 and July 7, respectively). As with Christmas, the dates of these celebrations are traditional and probably have no connection with the actual birthdays of these individuals. Catholic saints are remembered by a liturgical feast on the anniversary of their "birth" into heaven, that is, their day of death. In Hinduism, Ganesh Chaturthi is a festival celebrating the birth of the elephant-headed deity Ganesha in extensive community celebrations and at home. Figurines of Ganesha are made for the holiday and are widely sold. Sikhs celebrate the anniversary of the birth of Guru Nanak and other Sikh gurus, which is known as Gurpurb. Mawlid is the anniversary of the birth of Muhammad and is celebrated on the 12th or 17th day of Rabi' al-awwal by adherents of Sunni and Shia Islam, respectively. These are the two most commonly accepted dates of birth of Muhammad. However, there is much controversy regarding the permissibility of celebrating Mawlid, as some Muslims judge the custom an unacceptable practice according to Islamic tradition. In Iran, Mother's Day is celebrated on the birthday of Fatima al-Zahra, the daughter of Muhammad. Banners reading Ya Fatima ("O Fatima") are displayed on government buildings, private buildings, public streets and car windows. Religious views In Judaism, rabbis are divided over celebrating this custom, although the majority of the faithful accept it. In the Torah, the only mention of a birthday is the celebration of Pharaoh's birthday in Egypt (Genesis 40:20). Although the birthday of Jesus of Nazareth is celebrated as a Christian holiday on December 25, historically the celebration of an individual person's birthday has been subject to theological debate. Early Christians, notes The World Book Encyclopedia, "considered the celebration of anyone's birth to be a pagan custom." Origen, in his commentary "On Levites," wrote that Christians should not only refrain from celebrating their birthdays but should look at them with disgust, as a pagan custom. A saint's day was typically celebrated on the anniversary of their martyrdom or death, considered the occasion of or preparation for their entrance into Heaven or the New Jerusalem.
Ordinary folk in the Middle Ages celebrated their saint's day (the saint they were named after), but nobility celebrated the anniversary of their birth. The "Squire's Tale", one of Chaucer's Canterbury Tales, opens as King Cambuskan proclaims a feast to celebrate his birthday. In the Modern era, the Catholic Church, the Eastern Orthodox Church and Protestantism, i.e. the three main branches of Christianity, as well as almost all Christian religious denominations, consider celebrating birthdays acceptable or at most a choice of the individual. An exception is Jehovah's Witnesses, who do not celebrate them for various reasons: in their interpretation this feast has pagan origins, was not celebrated by early Christians, is negatively expounded in the Holy Scriptures, and has customs linked to superstition and magic. In some historically Roman Catholic and Eastern Orthodox countries,[a] it is common to have a 'name day', otherwise known as a 'Saint's day'. It is celebrated in much the same way as a birthday, but it is held on the official day of a saint with the same Christian name as the birthday person; the difference being that one may look up a person's name day in a calendar, or easily remember common name days (for example, John or Mary). In pious traditions, however, the two were often made to coincide by giving a newborn the name of a saint celebrated on its day of confirmation or, more rarely, on its day of birth. Some are given the name of the religious feast of their christening day or birthday, for example, Noel or Pascal (French for Christmas and "of Easter"); as another example, Togliatti was given Palmiro as his first name because he was born on Palm Sunday. The birthday does not reflect Islamic tradition, and because of this, the majority of Muslims refrain from celebrating it. Others do not object, as long as it is not accompanied by behavior contrary to Islamic tradition. A good portion of Muslims (and Arab Christians) who have emigrated to the United States and Europe celebrate birthdays as customary, especially for children, while others abstain. Hindus celebrate the birth anniversary each year on the day that corresponds to the lunar or solar month of birth (Sun Signs Nirayana System – Sourava Mana Masa) and has the same asterism (star/nakshatra) as the date of birth. That age is reckoned whenever the Janma Nakshatra of the same month passes. Hindus regard death as more auspicious than birth, since the person is liberated from the bondages of material society. Also, traditionally, rituals and prayers for the departed are observed on the 5th and 11th days, with many relatives gathering. Historical and cultural perspectives According to Herodotus (5th century BC), of all the days in the year, the one which the Persians celebrate most is their birthday. It was customary to have the board furnished on that day with an ampler supply than common: the richer people would eat an ox, horse, camel, or donkey (Greek: ὄνον) baked whole, while the poorer classes instead used the smaller kinds of cattle. On his birthday, the king anointed his head and presented gifts to the Persians. According to the law of the Royal Supper, on that day "no one should be refused a request". The rule for drinking was "No restrictions". In ancient Rome, a birthday (dies natalis) was originally an act of religious cultivation (cultus).
A dies natalis was celebrated annually for a temple on the day of its founding, and the term is still used sometimes for the anniversary of an institution such as a university. The temple founding day might become the "birthday" of the deity housed there. March 1, for example, was celebrated as the birthday of the god Mars. Each human likewise had a natal divinity, the guardian spirit called the Genius, or sometimes the Juno for a woman, who was owed religious devotion on the day of birth, usually in the household shrine (lararium). The decoration of a lararium often shows the Genius in the role of the person carrying out the rites. A person marked their birthday with ritual acts that might include lighting an altar, saying prayers, making vows (vota), anointing and wreathing a statue of the Genius, or sacrificing to a patron deity. Incense, cakes, and wine were common offerings. Celebrating someone else's birthday was a way to show affection, friendship, or respect. In exile, the poet Ovid, though alone, celebrated not only his own birthday rite but that of his far distant wife. Birthday parties affirmed social as well as sacred ties. One of the Vindolanda tablets is an invitation to a birthday party from the wife of one Roman officer to the wife of another. Books were a popular birthday gift, sometimes handcrafted as a luxury edition or composed especially for the person honored. Birthday poems are a minor but distinctive genre of Latin literature. The banquets, libations, and offerings or gifts that were a regular part of most Roman religious observances thus became part of birthday celebrations for individuals. A highly esteemed person would continue to be celebrated on their birthday after death, in addition to the several holidays on the Roman calendar for commemorating the dead collectively. Birthday commemoration was considered so important that money was often bequeathed to a social organization to fund an annual banquet in the deceased's honor. The observance of a patron's birthday or the honoring of a political figure's Genius was one of the religious foundations for imperial cult or so-called "emperor worship." The Chinese word for "year(s) old" (t 歲, s 岁, suì) is entirely different from the usual word for "year(s)" (年, nián), reflecting the former importance of Chinese astrology and the belief that one's fate was bound to the stars imagined to be in opposition to the planet Jupiter at the time of one's birth. The importance of this duodecennial orbital cycle only survives in popular culture as the 12 animals of the Chinese zodiac, which change each Chinese New Year and may be used as a theme for some gifts or decorations. Because of the importance attached to the influence of these stars in ancient China and throughout the Sinosphere, East Asian age reckoning previously began with one at birth and then added years at each Chinese New Year, so that it formed a record of the suì one had lived through rather than of the exact amount of time from one's birth. This method—which can differ by as much as two years of age from other systems—is increasingly uncommon and is not used for official purposes in the PRC or on Taiwan, although the word suì is still used for describing age. Traditionally, Chinese birthdays—when celebrated—were reckoned using the lunisolar calendar, which varies from the Gregorian calendar by as much as a month forward or backward depending on the year. 
Celebrating the lunisolar birthday remains common on Taiwan while growing increasingly uncommon on the mainland. Birthday traditions reflected the culture's deep-seated focus on longevity and wordplay. Owing to the homophony in some dialects between 酒 ("rice wine") and 久 (meaning "long" in the sense of time passing), osmanthus and other rice wines are traditional gifts for birthdays in China. Longevity noodles are another traditional food consumed on the day, although western-style birthday cakes are increasingly common among urban Chinese. Hongbaos—red envelopes stuffed with money, now especially the red 100 RMB notes—are the usual gift from relatives and close family friends for most children. Gifts for adults on their birthdays are much less common, although the birthday for each decade is a larger occasion that might prompt a large dinner and celebration. The Japanese reckoned their birthdays by the Chinese system until the Meiji Reforms. Celebrations remained uncommon or muted until after the American occupation that followed World War II. Children's birthday parties are the most important, typically celebrated with a cake, candles, and singing. Adults often just celebrate with their partner. In North Korea, the Day of the Sun, Kim Il Sung's birthday, is the most important public holiday of the country, and Kim Jong Il's birthday is celebrated as the Day of the Shining Star. North Koreans are not permitted to celebrate birthdays on July 8 and December 17 because these were the dates of the deaths of Kim Il Sung and Kim Jong Il, respectively. More than 100,000 North Koreans celebrate displaced birthdays on July 9 and December 18 instead to avoid these dates. A person born on July 8 before 1994 may change their birthday, with official recognition. South Korea was one of the last countries to use a form of East Asian age reckoning for many official purposes. Prior to June 2023, three systems were used together—"Korean ages" that start with 1 at birth and increase every January 1 with the Gregorian New Year, "year ages" that start with 0 at birth and otherwise increase the same way, and "actual ages" that start with 0 at birth and increase each birthday. First birthdays were heavily celebrated, despite the reckoned age usually having little to do with the child's actual age. In June 2023, all Korean ages were set back at least one year, and official ages are henceforth reckoned only by birthdays. In Ghana, children wake up on their birthday to a special treat called oto, which is a patty made from mashed sweet potato and eggs fried in palm oil. Later they have a birthday party where they usually eat stew and rice and a dish known as kelewele, which is fried plantain chunks. Distribution through the year Birthdays are fairly evenly distributed throughout the year, with some seasonal effects. In the United States, there tend to be more births in September and October. This may be because there is a holiday season nine months before (the human gestation period is about nine months), or because the longest nights of the year also occur in the Northern Hemisphere nine months before. However, the holidays affect birth rates more than the winter: New Zealand, a Southern Hemisphere country, has the same September and October peak with no corresponding peak in March and April. The least common birthdays tend to fall around public holidays, such as Christmas, New Year's Day and fixed-date holidays such as Independence Day in the US, which falls on July 4.
Between 1973 and 1999, September 16 was the most common birthday in the United States, and December 25 was the least common birthday (other than February 29, because of leap years). In 2011, October 5 and 6 were reported as the most frequently occurring birthdays. New Zealand's most common birthday is September 29, and the least common birthday is December 25. The ten most common birthdays all fall within a thirteen-day period, between September 22 and October 4. The ten least common birthdays (other than February 29) are December 24–27, January 1–2, February 6, March 22, April 1, and April 25. This is based on all live births registered in New Zealand between 1980 and 2017. Positive and negative associations with culturally significant dates may influence birth rates. One study shows a 5.3% decrease in spontaneous births and a 16.9% decrease in Caesarean births on Halloween, compared to dates occurring within one week before and one week after the October holiday. In contrast, on Valentine's Day, there is a 3.6% increase in spontaneous births and a 12.1% increase in Caesarean births. In Sweden, 9.3% of the population is born in March and 7.3% in November, when a uniform distribution would give 8.3%. In the Gregorian calendar (a common solar calendar), February in a leap year has 29 days instead of the usual 28, so the year lasts 366 days instead of the usual 365. A person born on February 29 may be called a "leapling" or a "leaper". In common years, they usually celebrate their birthdays on February 28. In some situations, March 1 is used as the birthday in a non-leap year, since it is the day following February 28. Technically, a leapling will have fewer birthday anniversaries than their age in years. This phenomenon is exploited when a person claims to be only a quarter of their actual age, by counting their leap-year birthday anniversaries only. In Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic the pirate apprentice discovers that he is bound to serve the pirates until his 21st birthday rather than until his 21st year. For legal purposes, birthdays depend on how local laws count time intervals. An individual's Beddian birthday, named in tribute to firefighter Bobby Beddia, occurs during the year that their age matches the last two digits of the year they were born. Some studies show people are more likely to die on their birthdays, with explanations including excessive drinking, suicide, cardiovascular events due to high stress or happiness, efforts to postpone death for major social events, and death certificate paperwork errors.
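The leapling arithmetic above can be made concrete with a short sketch; the birth date and function names below are illustrative, not taken from the article.

```python
# Minimal sketch of leapling arithmetic: a person born on February 29
# has fewer actual birthday anniversaries than their age in years.
# (The dates and function names here are illustrative.)
from datetime import date

def is_leap(year: int) -> bool:
    # Gregorian rule: every 4th year, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def feb29_anniversaries(birth_year: int, today: date) -> int:
    # Count leap years after birth whose February 29 has already passed.
    return sum(
        1
        for y in range(birth_year + 1, today.year + 1)
        if is_leap(y) and date(y, 2, 29) <= today
    )

today = date(2025, 6, 1)
birth_year = 2000              # born 29 February 2000, a leap year
age = today.year - birth_year  # 25: the Feb/Mar birthday has passed by June
print(age, feb29_anniversaries(birth_year, today))  # prints: 25 6
```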
======================================== |
[SOURCE: https://techcrunch.com/video/how-did-davos-turn-into-a-tech-conference/] | [TOKENS: 704] |
How did Davos turn into a tech conference? The World Economic Forum's annual meeting in Davos felt different this year, and not just because Meta and Salesforce took over storefronts on the main promenade. AI dominated the conversation in a way that overshadowed traditional topics like climate change and global poverty, and the CEOs weren't holding back. There was public criticism of trade policy, warnings about AI bubbles popping, and a lot of talk about what comes next for the industry. Watch as TechCrunch's Equity podcast discusses which conversations took over Davos this week, the latest startup raises that caught our eye, and more. Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. Theresa Loconsolo is an audio producer at TechCrunch focusing on Equity, the network's flagship podcast. Before joining TechCrunch in 2022, she was one of two producers at a four-station conglomerate where she wrote, recorded, voiced and edited content, and engineered live performances and interviews from guests like lovelytheband. Theresa is based in New Jersey and holds a bachelor's degree in Communication from Monmouth University. You can contact or verify outreach from Theresa by emailing theresa.loconsolo@techcrunch.com. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Email_forwarding] | [TOKENS: 1641] |
Contents Email forwarding Email forwarding generically refers to the operation of re-sending an email message, previously delivered to one email address, on to one or more different email addresses. The term forwarding, used for mail since long before electronic communications, has no specific technical meaning, but it implies that the email has been moved "forward" to a new destination. Email forwarding can also redirect mail going to a certain address and send it to one or more other addresses. Conversely, email items going to several different addresses can converge via forwarding to end up in a single address's in-box. Email users and administrators of email systems use the same term when speaking of both server-based and client-based forwarding. Server-based forwarding The domain name (the part appearing to the right of @ in an email address) defines the target server(s) for the corresponding class of addresses. A domain may also define backup servers; they have no mailboxes and forward messages without changing any part of their envelopes. By contrast, primary servers can deliver a message to a user's mailbox and/or forward it by changing some envelope addresses. ~/.forward files (see below) provide a typical example of server-based forwarding to different recipients. Email administrators sometimes use the term redirection as a synonym for server-based email-forwarding to different recipients. Protocol engineers sometimes use the term Mediator to refer to a forwarding server. Because of spam, it is becoming increasingly difficult to reliably forward mail across different domains, and some recommend avoiding it if at all possible. Plain message-forwarding changes the envelope recipient(s) and leaves the envelope sender field untouched. The "envelope sender" field does not equate to the From header that email client software usually displays: it is a field used in the early stages of the SMTP protocol and subsequently saved as the Return-Path header. This field holds the address to which mail-systems must send bounce messages — reporting delivery-failure (or success) — if any. By contrast, the terms remailing or redistribution can sometimes mean re-sending the message and also rewriting the "envelope sender" field. Electronic mailing lists furnish a typical example: authors submit messages to a reflector that performs remailing to each list address. That way, bounce messages (which report a failure delivering a message to any list-subscriber) will not reach the author of a message; misconfigured vacation autoreplies, however, still do. Typically, plain message-forwarding does alias-expansion, while proper message-forwarding, also named forwarding tout court, serves for mailing-lists. When additional modifications to the message are carried out, so that the operation resembles a Mail User Agent submitting a new message, the term forwarding becomes misleading and remailing seems more appropriate. In the Sender Policy Framework (SPF), the domain name in the envelope sender remains subject to policy restrictions. Therefore, SPF generally disallows plain message-forwarding: a forwarded email is sent from the forwarding server, which is not authorized to send mail for the original sender's domain, so the SPF check will fail. Intra-domain redirection complies with SPF as long as the relevant servers share a consistent configuration. 
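To make the failure mode concrete, consider a hypothetical setup (every domain, address, and IP below is invented). Suppose example.com publishes this SPF record in DNS:

    example.com.  IN TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"

A message from author@example.com is delivered to a mailbox at forwarder.example and then plainly forwarded on. A minimal Python sketch of that forwarding step shows why the next hop rejects it:

    import smtplib
    from email import message_from_bytes

    # The previously delivered message (file path is hypothetical).
    with open("/var/mail/alice/message.eml", "rb") as f:
        msg = message_from_bytes(f.read())

    # Plain forwarding: only the envelope recipient changes. The envelope sender
    # (MAIL FROM) is still author@example.com, so the next hop checks the IP of
    # forwarder.example against example.com's SPF record and, given the "-all"
    # above, refuses the message.
    with smtplib.SMTP("mail.next-hop.example") as smtp:
        smtp.sendmail("author@example.com", ["bob@next-hop.example"], msg.as_bytes())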
Mail servers that practice inter-domain message-forwarding may break SPF even if they do not implement SPF themselves, i.e. they neither apply SPF checks nor publish SPF records. Sender Rewriting Scheme provides for a generic forwarding mechanism compatible with SPF, rewriting the envelope sender to an address in the forwarder's own domain. Client-based forwarding Client forwarding can take place automatically using a non-interactive client such as a mail retrieval agent. Although the retrieval agent uses a client protocol, this forwarding resembles server forwarding in that it keeps the same message-identity. Concerns about the envelope-sender apply. An end-user can manually forward a message using an email client. Forwarding inline quotes the message below the main text of the new message, and usually preserves original attachments as well as a choice of selected headers (e.g., the original From and Reply-To). The recipient of a message forwarded this way may still be able to reply to the original message; the ability to do so depends on the presence of original headers and may imply manually copying and pasting the relevant destination addresses. Forwarding as attachment prepares a MIME attachment (of type message/rfc822) that contains the full original message, including all headers and any attachments. Note that including all the headers discloses much information about the message, such as the servers that transmitted it and any client-tag added on the mailbox. The recipient of a message forwarded this way may be able to open the attached message and reply to it seamlessly. This kind of forwarding actually constitutes a remailing from the points of view of the envelope-sender and of the recipient(s); the message identity also changes. Historical development of email forwarding RFC 821, Simple Mail Transfer Protocol, by Jonathan B. Postel in 1982, provided for a forward-path for each recipient, in the form of, for example, @USC-ISIE.ARPA, @USC-ISIF.ARPA: Q-Smith@ISI-VAXA.ARPA — an optional list of hosts and a required destination-mailbox. When the list of hosts existed, it served as a source-route, indicating that each host had to relay the mail to the next host on the list. Otherwise, when the destination-information was insufficient but the server knew the correct destination, it could take the responsibility to deliver the message by responding with a 251 reply, such as "251 User not local; will forward to <forward-path>". The concept at that time envisaged the elements of the forward-path (source route) moving to the return-path (envelope sender) as a message got relayed from one SMTP server to another. Even though the specification discouraged the use of source-routing, dynamically building the return-path implied that the "envelope sender" information could not remain in its original form during forwarding. Thus RFC 821 did not originally allow plain message-forwarding. The introduction of the MX record made source-routing unnecessary. In 1989, RFC 1123 recommended accepting source-routing only for backward-compatibility. At that point, plain message-forwarding became the recommended action for alias-expansion. In 2008, RFC 5321 still mentions that "systems may remove the return path and rebuild [it] as needed", taking into consideration that not doing so might inadvertently disclose sensitive information. In practice, plain message-forwarding can conveniently be used for alias expansion within the same server or a set of coordinated servers. The reference SMTP implementation in the early 1980s was sendmail, which provided for ~/.forward files that store the target email-addresses for given users. 
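For instance, a hypothetical ~/.forward for a user alice, in classic sendmail syntax (the external address is invented), that keeps local delivery while forwarding a copy:

    # ~/.forward for alice: the backslash forces plain local delivery
    # (preventing re-expansion loops); a copy also goes to an external mailbox.
    \alice, alice@example.org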
This kind of server-based forwarding is sometimes called dot-forwarding. One can also configure some email-program filters to automatically perform forwarding or replying actions immediately after a message is received (a sketch of such a filter appears at the end of this article). Forward files can also contain shell scripts, which have become a source of many security problems. Formerly, only trusted users could use the command-line switch for setting the envelope sender (-f arg); some systems disabled this feature for security reasons. Email predates the formalization of client–server architectures in the 1990s; therefore, the distinction between client and server is somewhat forced. The original distinction contrasted daemons and user-controlled programs that run on the same machine. The sendmail daemon used to run with root privileges so it could impersonate any user whose mail it had to manage. On the other hand, users can access their own individual mail-files and configuration files, including ~/.forward. Client programs may assist in editing the server configuration-files of a given user, thereby causing some confusion as to what role each program plays. The term "virtual users" refers to email users who never log on to a mail-server system and only access their mailboxes using remote clients. A mail-server program may work for both virtual and regular users, or it may require minor modifications to take advantage of the fact that virtual users frequently share the same system id. The latter circumstance allows the server program to implement some features more easily, as it does not have to obey system-access restrictions. The same principles of operation apply; however, virtual users have more difficulty in accessing their configuration files, for good or ill. See also Notes |
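As an illustration of the filter-based forwarding mentioned above, here is a minimal Sieve script (RFC 5228, using the :copy modifier from RFC 3894; the address is invented) that forwards every incoming message while retaining the locally delivered copy:

    require ["copy"];

    # Forward each incoming message and keep the local copy as well.
    redirect :copy "alice@example.org";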
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Construct_(python_library)] | [TOKENS: 437] |
Contents Construct (Python library) Construct is a Python library for the construction and deconstruction of data structures in a declarative fashion. In this context, construction, or building, refers to the process of converting (serializing) a programmatic object into a binary representation. Deconstruction, or parsing, refers to the opposite process of converting (deserializing) binary data into a programmatic object. Being declarative means that user code defines the data structure, instead of following the convention of writing procedural code to accomplish the goal. Construct can work seamlessly with bit- and byte-level data granularity and various byte-orderings. Benefits Using declarative code has many benefits. For example, the same code that can parse can also build (symmetry), debugging and testing are much simpler (provable to some extent), and creating new constructs is easy (wrapping components).[citation needed] If one is familiar with the C programming language, one can think of constructs as casting from char * to struct foo * and vice versa, rather than writing code that unpacks the data. Example The following example shows how a TCP/IP protocol stack might be defined using Construct; some code is omitted for brevity and simplicity, and the code is just Python code that creates objects. First the Ethernet header (layer 2) is declared, next the IP header (layer 3), and finally the TCP header (layer 4). The hierarchy of the protocol stack is then defined: the code "binds" each pair of adjacent protocols into a separate unit, and each such unit "selects" the proper next layer based on its contained protocol. At this point, the code can parse captured TCP/IP frames into "packet" objects and build these packet objects back into binary representation (a sketch of such declarations appears at the end of this article). Ports and spin-offs Data::ParseBinary is a CPAN module that originated as a port of Construct to the Perl programming language (see its main POD document for its inspiration). Since the initial version, some parts of the original API have been deprecated. A port to Java is available on GitHub, with examples such as an Ethernet-header (layer 2) declaration. See also References External links |
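To give a flavor of the declarative style, here is a minimal sketch in the modern Construct 2.x API. It is not the article's original listing: the field names, enum values, and the heavily simplified layer-3 stand-in are illustrative.

    from construct import Struct, Bytes, Int16ub, Enum, Switch, GreedyBytes, this

    # Layer 2: a 14-byte Ethernet header, declared rather than hand-parsed.
    ethernet_header = Struct(
        "destination" / Bytes(6),
        "source" / Bytes(6),
        "type" / Enum(Int16ub, IPv4=0x0800, ARP=0x0806, IPv6=0x86DD),
    )

    # A simplified layer-3 stand-in; a real IPv4 declaration would model the
    # bit-level fields (version, IHL, flags, ...) with BitStruct.
    ipv4_header = Struct("raw" / Bytes(20))

    # Binding adjacent layers: the parsed Ethernet "type" field selects which
    # construct parses the rest of the frame.
    frame = Struct(
        "ethernet" / ethernet_header,
        "payload" / Switch(this.ethernet.type, {"IPv4": ipv4_header}, default=GreedyBytes),
    )

    # The same declaration both parses and builds (the symmetry noted above).
    raw = bytes.fromhex("ffffffffffff" "001122334455" "0800") + bytes(20)
    pkt = frame.parse(raw)            # bytes -> container object
    assert frame.build(pkt) == raw    # container object -> identical bytes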
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ancient_Greece] | [TOKENS: 9856] |
Contents Ancient Greece Ancient Greece (Ancient Greek: Ἑλλάς, romanized: Hellás) was a northeastern Mediterranean civilization, existing from the Greek Dark Ages of the 12th–9th centuries BC to the end of classical antiquity (c. 600 AD), that comprised a loose collection of culturally and linguistically related city-states and communities. Prior to the Roman period, most of these regions were officially unified only once under the Kingdom of Macedon from 338 to 323 BC.[a] In Western history, the era of classical antiquity was immediately followed by the Early Middle Ages and the Byzantine period. Three centuries after the decline of Mycenaean Greece during the Bronze Age collapse, Greek urban poleis began to form in the 8th century BC, ushering in the Archaic period and the colonization of the Mediterranean Basin. This was followed by the age of Classical Greece, from the Greco-Persian Wars to the death of Alexander the Great in 323 BC, which included the Golden Age of Athens and the Peloponnesian War between Athens and Sparta. The unification of Greece by Macedon under Philip II and the subsequent conquest of the Achaemenid Empire by Alexander the Great spread Hellenistic civilization across the Middle East. The Hellenistic period is considered to have ended in 30 BC, when the last Hellenistic kingdom, Ptolemaic Egypt, was annexed by the Roman Republic. Classical Greek culture, especially philosophy, had a powerful influence on ancient Rome, which carried a version of it throughout the Mediterranean and much of Europe. For this reason, Classical Greece is generally considered the cradle of Western civilization, the seminal culture from which the modern West derives many of its founding archetypes and ideas in politics, philosophy, science, and art. Chronology Classical antiquity in the Mediterranean region is commonly considered to have begun in the 8th century BC (around the time of the earliest recorded poetry of Homer) and ended in the 6th century AD. Classical antiquity in Greece was preceded by the Greek Dark Ages (c. 1200 – c. 800 BC), archaeologically characterised by the protogeometric and geometric styles of designs on pottery. Following the Dark Ages was the Archaic period, beginning around the 8th century BC, which saw early developments in Greek culture and society leading to the Classical period from the Persian invasion of Greece in 480 BC until the death of Alexander the Great in 323 BC. The Classical period is characterized by a "classical" style, i.e. one which was considered exemplary by later observers, most famously in the Parthenon of Athens. Politically, the Classical period was dominated by Athens and the Delian League during the 5th century, but this dominance was displaced by Spartan hegemony during the early 4th century BC, before power shifted to Thebes and the Boeotian League and finally to the League of Corinth led by Macedon. This period was shaped by the Greco-Persian Wars, the Peloponnesian War, and the Rise of Macedon. Following the Classical period was the Hellenistic period (323–146 BC), during which Greek culture and power expanded into the Near East from the death of Alexander until the Roman conquest. Roman Greece is usually counted from the Roman victory over the Corinthians at the Battle of Corinth in 146 BC to the establishment of Byzantium by Constantine as the capital of the Roman Empire in 330 AD. 
Finally, Late Antiquity refers to the period of Christianization during the later 4th to early 6th centuries AD, consummated by the closure of the Academy of Athens by Justinian I in 529. Historiography The historical period of ancient Greece is unique in world history as the first period attested directly in comprehensive, narrative historiography, while earlier ancient history or protohistory is known from much more fragmentary documents such as annals, king lists, and pragmatic epigraphy. Herodotus is widely known as the "father of history": his Histories are eponymous of the entire field. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th-century BC historical figures such as Darius I of Persia, Cambyses II and Psamtik III, and alluding to some 8th-century BC persons such as Candaules. The accuracy of Herodotus' works is debated. Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle. Most were either Athenian or pro-Athenian, which is why far more is known about the history and politics of Athens than of many other cities. Their scope is further limited by a focus on political, military and diplomatic history, ignoring economic and social history. History The archaic period, lasting approximately from 800 to 500 BC, saw the culmination of political and social developments which had begun in the Greek Dark Age, with the polis (city-state) becoming the most important unit of political organisation in Greece. The absence of powerful states in Greece after the collapse of Mycenaean power, and the geography of Greece, where many settlements were separated from their neighbours by mountainous terrain, encouraged the development of small independent city-states. Several Greek states saw tyrants rise to power in this period, most famously at Corinth from 657 BC. The period also saw the founding of Greek colonies around the Mediterranean, with Euboean settlements at Al-Mina in the east as early as 800 BC, and Ischia in the west by 775. Increasing contact with non-Greek peoples in this period, especially in the Near East, inspired developments in art and architecture, the adoption of coinage, and the development of the Greek alphabet. Athens developed its democratic system over the course of the archaic period. Already in the 7th century, the right of all citizen men to attend the assembly appears to have been established. After a failed coup led by Cylon of Athens around 636 BC, Draco was appointed to establish a code of laws in 621. This failed to reduce the political tension between the poor and the elites, and in 594 Solon was given the authority to enact another set of reforms, which attempted to balance the power of the rich and the poor. In the middle of the 6th century, Pisistratus established himself as a tyrant, and after his death in 527 his son Hippias inherited his position; by the end of the 6th century he had been overthrown, and Cleisthenes carried out further democratising reforms. In Sparta, a political system with two kings, a council of elders, and five ephors developed over the course of the 8th and 7th centuries. According to Spartan tradition, this constitution was established by the legendary lawgiver Lycurgus. Over the course of the First Messenian War and Second Messenian War, Sparta subjugated the neighbouring region of Messenia, enserfing the population. 
In the 6th century, Greek city-states began to develop formal relationships with one another, where previously individual rulers had relied on personal relationships with the elites of other cities. Towards the end of the Archaic period, Sparta began to build a series of alliances, the Peloponnesian League, with cities including Corinth, Elis, and Megara, isolating Messenia and reinforcing Sparta's position against Argos, the other major power in the Peloponnese. Other alliances in the 6th century included those between Elis and Heraea in the Peloponnese; and between the Greek colony Sybaris in southern Italy, its allies, and the Serdaioi. In 499 BC, the Ionian city-states under Persian rule rebelled against their Persian-supported tyrant rulers. Supported by troops sent from Athens and Eretria, they advanced as far as Sardis and burnt the city before being driven back by a Persian counterattack. The revolt continued until 494, when the rebelling Ionians were defeated. Darius did not forget that Athens had assisted the Ionian revolt, and in 490 he assembled an armada to retaliate. Though heavily outnumbered, the Athenians—supported by their Plataean allies—defeated the Persian force at the Battle of Marathon, and the Persian fleet withdrew. Ten years later, a second invasion was launched by Darius' son Xerxes. The city-states of northern and central Greece submitted to the Persian forces without resistance, but a coalition of 31 Greek city-states, including Athens and Sparta, determined to resist the Persian invaders. At the same time, Greek Sicily was invaded by a Carthaginian force. In 480 BC, the first major battle of the invasion was fought at Thermopylae, where a small rearguard of Greeks, led by three hundred Spartans, held a crucial pass guarding the heart of Greece for several days; at the same time Gelon, tyrant of Syracuse, defeated the Carthaginian invasion at the Battle of Himera. The Persians were decisively defeated at sea by a primarily Athenian naval force at the Battle of Salamis, and on land in 479 BC at the Battle of Plataea. The alliance against Persia continued, initially led by the Spartan Pausanias but from 477 by Athens, and by 460 Persia had been driven out of the Aegean. During this long campaign, the Delian League gradually transformed from a defensive alliance of Greek states into an Athenian empire, as Athens' growing naval power intimidated the other league states. Athens ended its campaigns against Persia in 450, after a disastrous defeat in Egypt in 454, and the death of Cimon in action against the Persians on Cyprus in 450. As the Athenian fight against the Persian empire waned, conflict grew between Athens and Sparta. Suspicious of the increasing Athenian power funded by the Delian League, Sparta offered aid to reluctant members of the League to rebel against Athenian domination. These tensions were exacerbated in 462 BC when Athens sent a force to aid Sparta in overcoming a helot revolt, but this aid was rejected by the Spartans. In the 450s, Athens took control of Boeotia, and won victories over Aegina and Corinth. However, Athens failed to win a decisive victory, and in 447 lost Boeotia again. Athens and Sparta signed the Thirty Years' Peace in the winter of 446/445, ending the conflict. Despite the treaty, Athenian relations with Sparta declined again in the 430s, and in 431 BC the Peloponnesian War began. 
The first phase of the war saw a series of fruitless annual invasions of Attica by Sparta, while Athens successfully fought the Corinthian empire in northwest Greece and defended its own empire, despite a plague which killed the leading Athenian statesman Pericles. The war turned after Athenian victories led by Cleon at Pylos and Sphakteria, and Sparta sued for peace, but the Athenians rejected the proposal. The Athenian failure to regain control of Boeotia at Delium and Brasidas' successes in northern Greece in 424 improved Sparta's position after Sphakteria. After the deaths of Cleon and Brasidas, the strongest proponents of war on each side, a peace treaty was negotiated in 421 by the Athenian general Nicias. The peace did not last, however. In 418 BC allied forces of Athens and Argos were defeated by Sparta at Mantinea. In 415 Athens launched an ambitious naval expedition to dominate Sicily; the expedition ended in disaster at the harbor of Syracuse, with almost the entire army killed, and the ships destroyed. Soon after the Athenian defeat in Syracuse, Athens' Ionian allies began to rebel against the Delian league, while Persia began to once again involve itself in Greek affairs on the Spartan side. Initially the Athenian position continued relatively strong, with important victories at Cyzicus in 410 and Arginusae in 406. However, in 405 the Spartan Lysander defeated Athens in the Battle of Aegospotami, and began to blockade Athens' harbour; driven by hunger, Athens sued for peace, agreeing to surrender their fleet and join the Spartan-led Peloponnesian League. Following the Athenian surrender, Sparta installed an oligarchic regime, the Thirty Tyrants, in Athens, one of a number of Spartan-backed oligarchies which rose to power after the Peloponnesian war. Spartan predominance did not last: after only a year, the Thirty had been overthrown. The first half of the 4th century saw the major Greek states attempt to dominate the mainland; none were successful, and their resulting weakness led to a power vacuum which was eventually filled by Macedon under Philip II and then Alexander the Great. In the immediate aftermath of the Peloponnesian war, Sparta attempted to extend their own power, leading Argos, Athens, Corinth, and Thebes to join against them. Aiming to prevent any single Greek state gaining the dominance that would allow it to challenge Persia, the Persian king initially joined the alliance against Sparta, before imposing the Peace of Antalcidas ("King's Peace") which restored Persia's control over the Anatolian Greeks. By 371 BC, Thebes was in the ascendancy, defeating Sparta at the Battle of Leuctra, killing the Spartan king Cleombrotus I, and invading Laconia. Further Theban successes against Sparta in 369 led to Messenia gaining independence; Sparta never recovered from the loss of Messenia's fertile land and the helot workforce it provided. The rising power of Thebes led Sparta and Athens to join forces; in 362 they were defeated by Thebes at the Battle of Mantinea. In the aftermath of Mantinea, none of the major Greek states were able to dominate. Though Thebes had won the battle, their general Epaminondas was killed, and they spent the following decades embroiled in wars with their neighbours; Athens, meanwhile, saw its second naval alliance, formed in 377, collapse in the mid-350s. The power vacuum in Greece after the Battle of Mantinea was filled by Macedon, under Philip II, starting from the battle of Crocus field. 
In 338 BC, he defeated a Greek alliance at the Battle of Chaeronea, and subsequently formed the League of Corinth. Philip planned to lead the League to invade Persia, but was murdered in 336 BC. His son Alexander the Great was left to fulfil his father's ambitions. After campaigns against Macedon's western and northern enemies, and those Greek states that had broken from the League of Corinth following the death of Philip, Alexander began his campaign against Persia in 334 BC. He conquered Persia, defeating Darius III at the Battle of Issus in 333 BC, and after the Battle of Gaugamela in 331 BC proclaimed himself king of Asia. From 329 BC he led expeditions to Bactria and then India; further plans to invade Arabia and North Africa were halted by his death in 323 BC. The period from the death of Alexander the Great in 323 BC until the death of Cleopatra, the last Macedonian ruler of Egypt, is known as the Hellenistic period. In the early part of this period, a new form of kingship developed based on Macedonian and Near Eastern traditions. The first Hellenistic kings had previously been Alexander's generals, and took power in the period following his death, though they were not part of existing royal lineages and lacked historic claims to the territories they controlled. The most important of these rulers in the decades after Alexander's death were Antigonus I and his son Demetrius in Macedonia and the rest of Greece, Ptolemy in Egypt, and Seleucus I in Syria and the former Persian empire; smaller Hellenistic kingdoms included Epirus under the reign of Pyrrhus, the Attalids in Anatolia and the Greco-Bactrian kingdom. In the early part of the Hellenistic period, the exact borders of the Hellenistic kingdoms were not settled. Antigonus attempted to expand his territory by attacking the other successor kingdoms until they joined against him, and he was killed at the Battle of Ipsus in 301 BC. His son Demetrius spent many years in Seleucid captivity, and his son, Antigonus II, only reclaimed the Macedonian throne around 276. Meanwhile, the Seleucid kingdom gave up territory in the east to the Indian king Chandragupta Maurya in exchange for war elephants, and later lost large parts of Persia to the Parthian Empire. By the mid-3rd century, the kingdoms of Alexander's successors were mostly stable, though there continued to be disputes over border areas. The great capitals of Hellenistic culture were Alexandria in the Ptolemaic Kingdom and Antioch in the Seleucid Empire. The conquests of Alexander had numerous consequences for the Greek city-states. They greatly widened the horizons of the Greeks and led to a steady emigration of the young and ambitious to the new Greek empires in the east. Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as present-day Afghanistan and Pakistan, where the Greco-Bactrian Kingdom and the Indo-Greek Kingdom survived until the end of the 1st century BC. Some city-states within Greece formed themselves into two major leagues: the Achaean League (including Corinth and Argos) and the Aetolian League. For much of the period until the Roman conquest, these leagues were at war, often participating in the conflicts among the Diadochi's successor states to Alexander's empire.[citation needed] The Antigonid Kingdom became involved in a war with the Roman Republic in the late 3rd century. 
Although the First Macedonian War was inconclusive, the Romans continued to fight Macedon until it was completely absorbed into the Roman Republic (by 149 BC). In the east, the unwieldy Seleucid Empire gradually disintegrated, although a rump survived until 64 BC, whilst the Ptolemaic Kingdom continued in Egypt until 30 BC, when it too was conquered by the Romans. The Aetolian league grew wary of Roman involvement in Greece, and sided with the Seleucids in the Roman–Seleucid War; when the Romans were victorious, the league was effectively absorbed into the Republic. Although the Achaean league outlasted both the Aetolian league and Macedon, it was also soon defeated and absorbed by the Romans in 146 BC, bringing Greek independence to an end. The Greek peninsula came under Roman rule during the 146 BC conquest of Greece after the Battle of Corinth. Macedonia became a Roman province while southern Greece came under the surveillance of Macedonia's prefect; however, some Greek poleis managed to maintain a partial independence and avoid taxation. The Aegean Islands were added to this territory in 133 BC. Athens and other Greek cities revolted in 88 BC, and the revolt was crushed by the Roman general Sulla. The Roman civil wars devastated the land even further, until Augustus organized the peninsula as the province of Achaea in 27 BC. Greece was a key eastern province of the Roman Empire, as Roman culture had long been, in fact, Greco-Roman. The Greek language served as a lingua franca in the East and in Italy, and many Greek intellectuals such as Galen would perform most of their work in Rome. Geography The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions, each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were prominent features of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them. In the south lay the Peloponnese, consisting of the regions of Laconia (southeast), Messenia (southwest), Elis (west), Achaia (north), Korinthia (northeast), Argolis (east), and Arcadia (center). These names survive to the present day as regional units of modern Greece, though with somewhat different boundaries. Mainland Greece to the north, nowadays known as Central Greece, consisted of Aetolia and Acarnania in the west, Locris, Doris, and Phocis in the center, while in the east lay Boeotia, Attica, and Megaris. Northeast lay Thessaly, while Epirus lay to the northwest. Epirus stretched from the Ambracian Gulf in the south to the Ceraunian Mountains and the Aoos river in the north, and consisted of Chaonia (north), Molossia (center), and Thesprotia (south). In the northeast corner was Macedonia, originally consisting of Lower Macedonia and its regions, such as Emathia (Macedonia), Pieria, and Bottiaea, and later Almopia and Amphaxitis. The original capital of Macedon was Aigai; the capital later moved to Pella in the 4th century BC. Around the time of Alexander I of Macedon, the Argead kings of Macedon started to expand into Eordaia and the rest of Upper Macedonia, lands inhabited by independent Upper Macedonian tribes like the Lyncestae, Orestae and the Elimiotae and to the east, beyond the Axius river, into Mygdonia and regions settled by Thracian tribes. 
To the north of Macedonia lay various non-Greek peoples such as the Paeonians due north, the Thracians to the northeast, and the Illyrians, with whom the Macedonians were frequently in conflict, to the northwest. Chalcidice was settled early on by southern Greek colonists and was considered part of the Greek world, while from the late 2nd millennium BC substantial Greek settlement also occurred on the eastern shores of the Aegean, in Anatolia. During the Archaic period, the Greek population grew beyond the capacity of the limited arable land of Greece proper, resulting in the large-scale establishment of colonies elsewhere: according to one estimate, the population of the widening area of Greek settlement increased roughly ten-fold from 800 to 400 BC, from 800,000 to as many as 7.5–10 million. This expansion was driven not simply by trade, but also by the desire to found settlements. These Greek colonies were not, as Roman colonies were, dependent on their mother-city, but were independent city-states in their own right. Greeks settled outside of Greece in two distinct ways. The first was in permanent settlements founded by Greeks, which formed as independent poleis. The second form was in what historians refer to as emporia: trading posts occupied by both Greeks and non-Greeks and primarily concerned with the manufacture and sale of goods. Examples of this latter type of settlement are found at Al Mina in the east and Pithekoussai in the west. From around 750 to 500 BC, Greeks settled colonies in all directions. To the east, the Aegean coast of Asia Minor was colonized first, followed by Cyprus and the coasts of Thrace, the Sea of Marmara and the south coast of the Black Sea. Eventually, Greek colonization reached as far northeast as present-day Ukraine and Russia (Taganrog). To the west, the coasts of Illyria and Southern Italy (called "Magna Graecia") were settled, followed by Southern France, Corsica, and even eastern Spain. Greek colonies were also founded in Egypt and Libya. Modern Syracuse, Naples, Marseille and Istanbul had their beginnings as the Greek colonies Syracusae (Συράκουσαι), Neapolis (Νεάπολις), Massalia (Μασσαλία) and Byzantion (Βυζάντιον). These colonies played an important role in the spread of Greek influence throughout Europe and also aided in the establishment of long-distance trading networks between the Greek city-states, boosting the economy of ancient Greece. Politics and society Ancient Greece consisted of several hundred relatively independent city-states (poleis). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly, the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were "one people"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the poleis was fiercely defended; unification was something rarely contemplated by the ancient Greeks. 
Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of poleis remained neutral, and after the Persian defeat, the allies quickly returned to infighting. Thus, the major peculiarities of the ancient Greek political system were its fragmented nature, which does not seem to have had a tribal origin, and the particular focus on urban centers within otherwise tiny states. The peculiarities of the Greek system are further evidenced by the colonies that they set up throughout the Mediterranean, which, though they might count a certain Greek polis as their 'mother' (and remain sympathetic to her), were completely independent of the founding city. Inevitably smaller poleis might be dominated by larger neighbors, but conquest or direct rule by another city-state appears to have been quite rare. Instead the poleis grouped themselves into leagues, membership of which was in a constant state of flux. Later in the Classical period, the leagues would become fewer and larger, be dominated by one city (particularly Athens, Sparta and Thebes); and often poleis would be compelled to join under threat of war (or as part of a peace treaty). Even after Philip II of Macedon conquered the heartlands of ancient Greece, he did not attempt to annex the territory or unify it into a new province, but compelled most of the poleis to join his own Corinthian League. Initially many Greek city-states seem to have been petty kingdoms; there was often a city official carrying some residual, ceremonial functions of the king (basileus), e.g., the archon basileus in Athens. However, by the Archaic period and the first historical consciousness, most had already become aristocratic oligarchies. It is unclear exactly how this change occurred. For instance, in Athens, the kingship had been reduced to a hereditary, lifelong chief magistracy (archon) by c. 1050 BC; by 753 BC this had become a decennial, elected archonship; and finally an annually elected archonship by 683 BC. Through each stage, more power would have been transferred to the aristocracy as a whole, and away from a single individual. Inevitably, the domination of politics and concomitant aggregation of wealth by small groups of families was apt to cause social unrest in many poleis. In many cities a tyrant (not in the modern sense of repressive autocracy) would at some point seize control and govern according to their own will; often a populist agenda would help sustain them in power. In a system wracked with class conflict, government by a 'strongman' was often the best solution. Athens fell under a tyranny in the second half of the 6th century BC. When this tyranny was ended, the Athenians founded the world's first democracy as a radical solution to prevent the aristocracy regaining power. A citizens' assembly (the Ecclesia), for the discussion of city policy, had existed since the reforms of Draco in 621 BC; all citizens were permitted to attend after the reforms of Solon (early 6th century), but the poorest citizens could not address the assembly or run for office. With the establishment of the democracy, the assembly became the de jure mechanism of government; all citizens had equal privileges in the assembly. However, non-citizens, such as metics (foreigners living in Athens) or slaves, had no political rights at all. After the rise of democracy in Athens, other city-states founded democracies. However, many retained more traditional forms of government. 
As so often in other matters, Sparta was a notable exception to the rest of Greece, ruled through the whole period by not one, but two hereditary monarchs. This was a form of diarchy. The Kings of Sparta belonged to the Agiads and the Eurypontids, descendants respectively of Eurysthenes and Procles. Both dynasties' founders were believed to be twin sons of Aristodemus, a Heraclid ruler. However, the powers of these kings were held in check by both a council of elders (the Gerousia) and magistrates specifically appointed to watch over the kings (the Ephors). Only free, land-owning, native-born men could be citizens entitled to the full protection of the law in a city-state. In most city-states, unlike the situation in Rome, social prominence did not allow special rights. Sometimes families controlled public religious functions, but this ordinarily did not give any extra power in the government. In Athens, the population was divided into four social classes based on wealth. People could change classes if they made more money. In Sparta, all male citizens were called homoioi, meaning "peers". However, Spartan kings, who served as the city-state's dual military and religious leaders, came from two families. Women in Ancient Greece appear to have primarily performed domestic tasks, managed households, and borne and reared children. Slaves had no power or status. Slaves had the right to have a family and own property, subject to their master's goodwill and permission, but they had no political rights. By 600 BC, chattel slavery had spread in Greece. By the 5th century BC, slaves made up one-third of the total population in some city-states. Between 40 and 80% of the population of Classical Athens were slaves. Slaves outside of Sparta almost never revolted because they were made up of too many nationalities and were too scattered to organize. However, unlike later Western culture, the ancient Greeks did not think in terms of race. Most families owned slaves as household servants and laborers, and even poor families might have owned a few slaves. Owners were not allowed to beat or kill their slaves. Owners often promised to free slaves in the future to encourage slaves to work hard. Unlike in Rome, freedmen did not become citizens. Instead, they were mixed into the population of metics, which included people from foreign countries or other city-states who were officially allowed to live in the state. City-states legally owned slaves. These public slaves had a larger measure of independence than slaves owned by families, living on their own and performing specialized tasks. In Athens, public slaves were trained to look out for counterfeit coinage, while temple slaves acted as servants of the temple's deity and Scythian slaves were employed in Athens as a police force corralling citizens to political functions. Sparta had a special type of slaves called helots. Helots were Messenians enslaved en masse during the Messenian Wars by the state and assigned to families where they were forced to stay. Helots raised food and did household chores so that women could concentrate on raising strong children while men could devote their time to training as hoplites. Their masters treated them harshly, and helots revolted against their masters several times. In 370/369 BC, as a result of Epaminondas' liberation of Messenia from Spartan rule, the helot system there came to an end and the helots won their freedom. However, it persisted in Laconia until the 2nd century BC. 
For most of Greek history, education was private, except in Sparta. During the Hellenistic period, some city-states established public schools. Only wealthy families could afford a teacher. Boys learned how to read, write and quote literature. They also learned to sing and play one musical instrument and were trained as athletes for military service. They studied not for a job but to become an effective citizen. Girls also learned to read, write and do simple arithmetic so they could manage the household. They almost never received education after childhood. Boys went to school at the age of seven or, if they lived in Sparta, went to the barracks. The three types of teachers were the grammatistes for arithmetic, the kitharistes for music and dancing, and the paedotribae for sports. Boys from wealthy families attending the private school lessons were taken care of by a paidagogos, a household slave selected for this task who accompanied the boy during the day. Classes were held in teachers' private houses and included reading, writing, mathematics, singing, and playing the lyre and flute. When a boy became 12 years old the schooling started to include sports such as wrestling, running, and throwing discus and javelin. In Athens, some older youths attended an academy for the finer disciplines such as culture, sciences, music, and the arts. The schooling ended at age 18, followed by military training in the army, usually for one or two years. Some of Athens' greatest such schools included the Lyceum (the so-called Peripatetic school founded by Aristotle of Stageira) and the Platonic Academy (founded by Plato of Athens). The education system of the wealthy ancient Greeks is also called Paideia. At its economic height in the 5th and 4th centuries BC, the free citizenry of Classical Greece represented perhaps the most prosperous society in the ancient world, some economic historians considering Greece one of the most advanced pre-industrial economies. In terms of wheat, wages reached an estimated 7–12 kg (15–26 lb) daily for an unskilled worker in urban Athens, 2–3 times the 3.75 kg (8.3 lb) of an unskilled rural labourer in Roman Egypt, though Greek farm incomes too were on average lower than those available to urban workers. While slave conditions varied widely, the institution served to sustain the incomes of the free citizenry: an estimate of economic development drawn from the free citizenry (or derived from urban incomes alone) is therefore likely to overstate the true overall level, despite widespread evidence for high living standards. At least in the Archaic period, the fragmentary nature of ancient Greece, with many competing city-states, increased the frequency of conflict but conversely limited the scale of warfare. Unable to maintain professional armies, the city-states relied on their own citizens to fight. This inevitably reduced the potential duration of campaigns, as citizens would need to return to their own professions (especially in the case of, for example, farmers). Campaigns would therefore often be restricted to summer. When battles occurred, they were usually set piece and intended to be decisive. Casualties were slight compared to later battles, rarely amounting to more than five percent of the losing side, but the slain often included the most prominent citizens and generals who led from the front. The scale and scope of warfare in ancient Greece changed dramatically as a result of the Greco-Persian Wars. 
Fighting the enormous armies of the Achaemenid Empire was effectively beyond the capabilities of a single city-state. The eventual triumph of the Greeks was achieved by alliances of city-states (the exact composition changing over time), allowing the pooling of resources and division of labor. Although alliances between city-states occurred before this time, nothing on this scale had been seen before. The rise of Athens and Sparta as pre-eminent powers during this conflict led directly to the Peloponnesian War, which saw further development of the nature of warfare, strategy and tactics. Fought between leagues of cities dominated by Athens and Sparta, the increased manpower and financial resources increased the scale and allowed the diversification of warfare. Set-piece battles during the Peloponnesian war proved indecisive and instead there was increased reliance on attritionary strategies, naval battles and blockades and sieges. These changes greatly increased the number of casualties and the disruption of Greek society. Athens owned one of the largest war fleets in ancient Greece, with over 200 triremes, each powered by 170 oarsmen seated in three rows on each side of the ship. The city could afford such a large fleet—it had over 34,000 oarsmen—because it owned extensive silver mines that were worked by slaves. According to Josiah Ober, Greek city-states faced approximately a one-in-three chance of destruction during the archaic and classical period. Culture Ancient Greek philosophy focused on the role of reason and inquiry. In many ways, it had an important influence on modern philosophy, as well as modern science. Clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and Islamic scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day. Neither reason nor inquiry began with the ancient Greeks. Defining the difference between the Greek quest for knowledge and the quests of the elder civilizations, such as the ancient Egyptians and Babylonians, has long been a topic of study by theorists of civilization. The first known philosophers of Greece were the pre-Socratics, who attempted to provide naturalistic, non-mythical descriptions of the world. They were followed by Socrates, one of the first philosophers based in Athens during its golden age, whose ideas, known through second-hand accounts rather than writings of his own, laid the basis of Western philosophy. Socrates' disciple Plato, who wrote The Republic and established a radical difference between ideas and the concrete world, and Plato's disciple Aristotle, who wrote extensively about nature and ethics, are also immensely influential in Western philosophy to this day. The later Hellenistic philosophy, also originating in Greece, is defined by names such as Antisthenes (cynicism), Zeno of Citium (stoicism) and Plotinus (Neoplatonism). The earliest Greek literature was poetry and was composed for performance rather than private consumption. The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry. Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet to certainly compose their work in writing was Archilochus, a lyric poet from the mid-7th century BC. 
Tragedy developed around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry. Towards the beginning of the classical period, comedy began to develop—the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' Acharnians, produced in 425. Like poetry, Greek prose had its origins in the archaic period, and the earliest writers of Greek philosophy, history, and medical literature all date to the 6th century BC. Prose first emerged as the writing style adopted by the presocratic philosophers Anaximander and Anaximenes—though Thales of Miletus, considered the first Greek philosopher, apparently wrote nothing. Prose as a genre reached maturity in the classical era, and the major Greek prose genres—philosophy, history, rhetoric, and dialogue—developed in this period. The Hellenistic period saw the literary centre of the Greek world move from Athens, where it had been in the classical period, to Alexandria. At the same time, other Hellenistic kings such as the Antigonids and the Attalids were patrons of scholarship and literature, turning Pella and Pergamon respectively into cultural centres. It was thanks to this cultural patronage by Hellenistic kings, and especially the Museum at Alexandria, that so much ancient Greek literature has survived. The Library of Alexandria, part of the Museum, had the previously unenvisaged aim of collecting together copies of all known authors in Greek. Almost all of the surviving non-technical Hellenistic literature is poetry, and Hellenistic poetry tended to be highly intellectual, blending different genres and traditions, and avoiding linear narratives. The Hellenistic period also saw a shift in the ways literature was consumed—while in the archaic and classical periods literature had typically been experienced in public performance, in the Hellenistic period it was more commonly read privately. At the same time, Hellenistic poets began to write for private, rather than public, consumption. With Octavian's victory at Actium in 31 BC, Rome began to become a major centre of Greek literature, as important Greek authors such as Strabo and Dionysius of Halicarnassus came to Rome. The period of greatest innovation in Greek literature under Rome was the "long second century" from approximately 80 AD to around 230 AD. This innovation was especially marked in prose, with the development of the novel and a revival of prominence for display oratory both dating to this period. In Ancient Greek society, music was ever-present and considered a fundamental component of civilisation. It was an important part of public religious worship, private ceremonies such as weddings and funerals, and household entertainment. Men sang and played music at the symposium; both men and women sang at work; and children's games involved song and dance. Ancient Greek music was primarily vocal, sung either by a solo singer or a chorus, and usually accompanied by an instrument; purely instrumental music was less common. The Greeks used stringed instruments, including lyres, harps, and lutes; and wind instruments, of which the most important was the aulos, a reed instrument. Percussion instruments played a relatively unimportant role supporting stringed and wind instruments, and were used in certain religious cults. 
Ancient Greek mathematics contributed many important developments, including the basic rules of geometry, the idea of formal mathematical proof, and discoveries in number theory, mathematical analysis and applied mathematics, and came close to establishing integral calculus. The discoveries of several Greek mathematicians, including Pythagoras, Euclid, and Archimedes, are still used in mathematical teaching today. The Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. In the 3rd century BC, Aristarchus of Samos was the first to suggest a heliocentric system. Archimedes, in his treatise The Sand Reckoner, revived Aristarchus' hypothesis that "the fixed stars and the Sun remain unmoved, while the Earth revolves about the Sun on the circumference of a circle". Otherwise, only fragmentary descriptions of Aristarchus' idea survive. Eratosthenes, using the angles of shadows created at widely separated regions, estimated the circumference of the Earth with great accuracy (a worked example appears at the end of this passage). In the 2nd century BC, Hipparchus of Nicaea made a number of contributions, including the first measurement of precession and the compilation of the first star catalog, in which he proposed the modern system of apparent magnitudes. The Antikythera mechanism, a device for calculating the movements of planets, dates from about 80 BC and was the first ancestor of the astronomical computer.[citation needed] It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica. The ancient Greeks also made important discoveries in the medical field. Hippocrates was a physician of the Classical period, and is considered one of the most outstanding figures in the history of medicine. He is referred to as the "father of medicine" in recognition of his lasting contributions to the field as the founder of the Hippocratic school of medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields that it had traditionally been associated with (notably theurgy and philosophy), thus making medicine a profession. The art of ancient Greece has exercised an enormous influence on the culture of many countries from ancient times to the present day, particularly in the areas of sculpture and architecture. In the West, the art of the Roman Empire was largely derived from Greek models. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, with ramifications as far as Japan. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece dominated the art of the Western world. 
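Eratosthenes' shadow method reduces to a single proportion: the angle between the two cities' verticals is to a full circle as the distance between the cities is to the Earth's circumference. A sketch using the classical figures usually quoted (the article itself gives no numbers, so these values are assumptions):

    # Noon-solstice shadow angle at Alexandria while the Sun stood directly
    # overhead at Syene, plus the assumed Syene-Alexandria distance.
    angle_deg = 7.2
    distance_stadia = 5_000

    circumference = distance_stadia * (360 / angle_deg)
    print(circumference)  # 250000.0 stadia, roughly 39,000-46,000 km
                          # depending on which stadion length is assumed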
Religion was a central part of ancient Greek life. Though the Greeks of different cities and tribes worshipped similar gods, religious practices were not uniform and the gods were thought of differently in different places. The Greeks were polytheistic, worshipping many gods, but as early as the 6th century BC a pantheon of twelve Olympians began to develop. Greek religion was influenced by the practices of the Greeks' near eastern neighbours at least as early as the archaic period, and by the Hellenistic period this influence was seen in both directions. The most important religious act in ancient Greece was animal sacrifice, most commonly of sheep and goats. Sacrifice was accompanied by public prayer, and prayer and hymns were themselves a major part of ancient Greek religious life. Legacy The civilization of ancient Greece has been immensely influential on language, politics, educational systems, philosophy, science, and the arts. It became the Leitkultur of the Roman Empire to the point of marginalizing native Italic traditions. As Horace put it, "Graecia capta ferum victorem cepit et artis / intulit agresti Latio" (Epistulae 2.1.156f.): "Captive Greece took captive her uncivilised conqueror and instilled her arts in rustic Latium." Via the Roman Empire, Greek culture came to be foundational to Western culture in general. The Byzantine Empire inherited Classical Greek-Hellenistic culture directly, without Latin intermediation, and the preservation of Classical Greek learning in medieval Byzantine tradition further exerted a strong influence on the Slavs and later on the Islamic Golden Age and the Western European Renaissance. A modern revival of Classical Greek learning took place in the Neoclassicism movement in 18th- and 19th-century Europe and the Americas.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sifrei_Kodesh] | [TOKENS: 4301] |
Sifrei Kodesh Sifrei Kodesh (Hebrew: ספרי קודש, lit. 'Holy books'), commonly referred to as sefarim (Hebrew: ספרים, lit. 'books'), or in its singular form, sefer, are books of Jewish religious literature and are viewed by religious Jews as sacred. These are generally works of Torah literature, i.e. Tanakh and all works that expound on it, including the Mishnah, Midrash (Halakha, Aggadah), Talmud, and all works of Musar, Hasidism, Kabbalah, or machshavah ("Jewish Thought"). Historically, sifrei kodesh were generally written in Hebrew, with some in Judeo-Aramaic or Arabic, although in recent years thousands of titles in other languages, most notably English, have been published. An alternative spelling for 'sefarim' is seforim. Terms The term Sifrei Kodesh is Hebrew for "Holy Books", and includes all books that are considered holy in Rabbinic Judaism. This includes all Torah literature as well as Jewish prayer books. Among Orthodox Jews the word ספר sefer (plural ספרים s'farim) is used for books of the Tanakh, the Oral Torah (Mishnah and Talmud) or any work of rabbinic literature. Works unrelated to Torah study are rarely called sefer by English-speaking Orthodox Jews. Among Hebrew-speaking Ashkenazi Jews, the differentiation between books related to Torah study and other books is made by referring to the former with the traditional Ashkenazi pronunciation (SEY-fur) and to the latter with the Modern Hebrew pronunciation (SEF-fer).[citation needed] The term "Torah" has two meanings. It can refer solely to the Five Books of Moses. Traditionally, it is written on a parchment scroll, known as a Sefer Torah, although it is also printed in book form, known as a Chumash (and in some cases a tikkun). The term "Torah" can also include the Nevi'im and Ketuvim and rabbinic texts, and such books are therefore sometimes also referred to as "Torah literature" (Hebrew: ספרות תורנית, romanized: Sifrut Toranit).[citation needed] The Hebrew Bible or Tanakh, also known as Torah Shebikhtav ("Written" as opposed to "Oral" Torah), is a collective term for the three sections of the Bible, those being the Torah, the Nevi'im, and the Ketuvim. Separately, the Nevi'im and Ketuvim are also called Nakh. Numerous commentaries on the Tanakh have been written and published over the last thousand years. The most notable are Targum Onkelos, a translation of the Torah into Judeo-Aramaic written by Onkelos, and the commentary of Rashi on the entire Tanakh. Both are traditionally printed in the Chumash alongside the biblical text. Other commentaries that are sometimes printed alongside the text in the Chumash are those of Rabbi Jacob ben Asher and Rabbi Shabbethai Bass (the Siftei Chachamim). Commentaries traditionally printed alongside the Nakh are Rashi as well as Metzudat David and Metzudat Zion by Rabbi David Altschuler. In addition to the classic printings of the Tanakh, which do not include many commentaries beyond Rashi and Targum Onkelos, there is the Mikraot Gedolot edition, which was first published in the early sixteenth century. 
Commentaries in the Mikraot Gedolot on the Torah are generally those of Abraham ibn Ezra (Sefer ha-Yashar), Nachmanides, Rabbi Shlomo Ephraim Luntschitz (the Keli Yakar), Chaim ibn Attar, and the translation and commentary attributed to Rabbi Jonathan ben Uzziel, known as Targum Pseudo-Jonathan, all in addition to Rashi and Targum Onkelos; while commentaries on the Nakh are those of Rashi, Rabbi David Altschuler, Rabbi David Kimhi, Rabbi Joseph Kara, and, in some volumes, Rabbi Obadiah ben Jacob Sforno (the Sforno or Seforno). Among the numerous commentaries on the Tanakh not published in the Mikraot Gedolot are the Meam Loez, Malbim, Ha'amek Davar, Torah Temimah, and The Hirsch Chumash. Aside from the Bible, there were several writings of Jewish religious significance in ancient times, known today as "the outer books". There are other writings, however, that have been claimed to be ancient but that most agree were written more recently. These include the Sefer Yetzirah, which some say was written by Abraham, and the Book of Enoch, which some say was written by Enoch. Works of Chazal Jewish belief is that the Pentateuch is of Mosaic authorship, meaning that it was dictated by God to Moses. Later writings, the Nevi'im and Ketuvim, were, according to tradition, written by Jewish prophets. For over a thousand years, these books, known as the Tanakh, were more or less the sole writings of Judaism. However, there was much material that was not written down, and was instead memorized. Known as the Oral Torah, it includes over five hundred laws derived through Talmudic hermeneutics as well as the laws given to Moses at Sinai (Hebrew: הלכה למשה מסיני, romanized: Halakhah leMoshe miSinai). However, circa 200 CE, much of the Oral Torah was written down, and is known as the Mishnah (the Zohar, a book chronicling the hidden parts of the Torah, was written down around this time as well by Rabbi Shimon bar Yochai). Three hundred years later the Talmud was written, expounding on the Mishnah. For generations, the Oral Torah had been transmitted by word of mouth, largely with the help of the Sanhedrin, the leading Jewish authority. However, after the destruction of the Second Temple, the Sanhedrin was uprooted and much of the Oral Torah was being forgotten. Therefore, c. 188 CE, Rabbi Judah ha-Nasi, head of the exiled Sanhedrin, compiled the Mishnah, i.e. the teachings of the Oral Torah. Since the Maccabean Revolt, however, much had already been lost, which led to many disagreements among the scholars, the Tannaim. Therefore, the Mishnah includes their differing opinions. As Maimonides wrote in the introduction to his Mishneh Torah: "[Rabbi Judah ha-Nasi] gathered together all the traditions, enactments, interpretations, and expositions of every portion of the Torah, that either came down to Moses, our teacher, or had been deduced by the courts in successive generations." A similar project was carried out by Rabbi Hiyya bar Abba and his student Rabbi Hoshaiah, known as the Tosefta. A collection of statements not included in the Mishnah was compiled by Rabbi Oshiya and Bar Kappara, known as Baraitot. Circa 349 CE, the Sanhedrin, exiled from Jerusalem and sitting in Tiberias, wrote the Jerusalem Talmud, a mammoth work compiling the teachings of the rabbis of the recent generations, known as Amoraim, as they expounded on the Mishnah. It is largely attributed to Rabbi Yochanan. 
However, the Jerusalem Talmud is generally overshadowed by the Babylonian Talmud, a similar yet much larger work compiling the teachings of the Amoraim and completed in Babylonia circa 500 CE. The teachings were largely legalistic in nature, stating halakha. There were other teachings, known as aggadah, which incorporate narratives, parables, practical advice, remedies, and insights. The Babylonian Talmud, attributed to Rav Ashi and Ravina, was first printed in 1483 by Joshua Solomon Soncino. Soncino's layout of the Talmud, with the original Talmud text in the center of the page, the commentary of Rashi on the outer margins, and the commentary of Tosafot on the inner ones, was later imitated by the Christian printer Daniel Bomberg, who printed the entire Talmud between the years 1519 and 1523, and by all subsequent major printings of the Talmud. Rabbi Moshe Shapiro, rabbi of Slavuta, Ukraine and owner of a printing press, published the Slavita Shas[a] in the early 1800s. In 1886, the Romm Publishing House in Vilnius published the Vilna Shas, which has since been reprinted and remains the classic print of the Talmud. Over the years, numerous commentaries have been written on the Talmud. While the most commonly referenced commentaries are those of Rashi and Tosafot, which, as mentioned, are printed in the margins of the Talmud, other famous commentaries (which are often recognized as Halakhic works as well) include the Piskei HaRosh, Shitah Mekubetzet, Maharsha (the Chiddushei Halachot and Chiddushei Aggadot), the Pnei Yehoshua, the Mordechai, the Chiddushei HaRitva, the Meiri, the Maharshal's Chochmas Shlomo and Yam Shel Shlomo, the Meir Einei Chachamim, the Kehillos Yaakov, the Shaarei Yosher, and the Birkat Shmuel, as well as many published shiurim (classes) given on the Talmud, including those of Rabbi Nochum Partzovitz (Chiddushei Reb Nochum and Shiurei Reb Nochum), Rabbi Shmuel Rozovsky (Shiurei Reb Shmuel and Chiddushei Reb Shmuel), Rabbi Reuven Grozovsky (Chiddushei Reb Reuven), Rabbi Elchonon Wasserman (Kovetz Shiurim and Kovetz He'aros), Rabbi Chaim Soloveitchik (Chiddushei HaGrach al HaShas), Rabbi Naftoli Trop (Chiddushei HaGranat), and Rabbi Aryeh Leib Malin (Chiddushei Reb Aryeh Leib). Kabbalah The term Kabbalah refers to the "hidden parts of the Torah", often described as "Jewish metaphysics". Kabbalistic works show how every physical thing is a metaphor for a spiritual concept. The primary Kabbalistic work, the Zohar, was written by Rabbi Shimon bar Yochai, a Tanna who lived in the second century, although it was lost for many years. It was rediscovered in Spain in the thirteenth century and transcribed by hand numerous times, leading to differences between the texts. Between 1558 and 1560, it was printed in Mantua based on ten different manuscripts in order to glean the correct text. A separate printing took place in Cremona around the same time, using only six manuscripts, leading to differences between the two printings. The Zohar was largely expounded on by Rabbi Yitzchak Luria (known as the Arizal), and his teachings were summarized in the book Etz Chaim by his chief student, Rabbi Chaim Vital. Halakha Jewish law, known in Hebrew as Halakha, was transcribed first in the Mishnah and later in the Talmud, with the differing opinions spread out over sixty-three tractates. However, later rabbis, namely the Geonim of the Early Middle Ages, the Rishonim of the High and Late Middle Ages, and the Acharonim of modern times, wrote more conclusive works. 
Many of these works are responsa (she'eilot u'teshuvot in Hebrew), printed questions and answers. The Geonim, the leaders of Jewry in the Early Middle Ages, primarily in Babylonia, were not prolific writers like later generations. However, among their few writings is the famed Sheiltot de-Rav Ahai, written by Rabbi Achai Gaon. The Rishonim, the leading rabbis of the Middle Ages after the Geonim, left many written Halakhic works, including the Piskei HaRosh of Rabbi Asher ben Yechiel and the Sefer HaHalakhot of Rabbi Yitzchak Alfasi, both of which are often published in the back of the Talmud; and the Arba'ah Turim, also known as the Tur, of Rabbi Yaakov ben Asher, a four-volume work written in an attempt to organize Jewish law. Rabbi Moshe ben Maimon, known as Maimonides or as the Rambam, was a Rishon who lived in Spain, Morocco, and Egypt in the second half of the twelfth century. The author of several books, his most famous is a halakhic work, the Mishneh Torah, also known as the Yad HaChazakah or simply as the Rambam, which is fourteen volumes long. Although the Mishneh Torah received much backlash from contemporary Jewish leaders when it was first written, it soon became recognized by world Jewry as authentic Torah literature, with many commentaries written on it, including the Ohr Somayach, Tzofnath Paneach, and the writings of the Soloveitchik dynasty, including Chiddushei Rabbeinu Chaim by Rabbi Chaim Soloveitchik; works by his sons, the Chiddushei HaGram HaLevi of Rabbi Moshe Soloveitchik and the Chiddushei Maran Ryz HaLevi of Rabbi Yitzchak Zev Soloveitchik; and, by his grandson Rabbi Meshulam Dovid Soloveitchik, the Chiddushei Rabbeinu Meshulam Dovid Halevi. A student of Rabbi Chaim Soloveitchik, Rabbi Isser Zalman Meltzer, wrote his own commentary on the Rambam, titled Even HaEzel. Rabbi Yosef Karo completed the Shulchan Aruch (or Code of Jewish Law, sometimes shortened to the Codes), likely the most monumental Halakhic work ever written, in 1565 in Safed. It was a condensation of his previous Halakhic work, the Beit Yosef, which was written as a commentary on the Arba'ah Turim. Like the Tur, it was divided into four sections: Orach Chayim, Yoreh De'ah, Even Ha'ezer, and Choshen Mishpat. The Mapah, a commentary on the Shulchan Aruch by Rabbi Moshe Isserles (the Rema), is generally printed together with the Shulchan Aruch in the center of the page, albeit in a different font, with the commentaries Turei Zahav of Rabbi David HaLevi Segal and Magen Avraham of Rabbi Avraham Gombiner or Siftei Kohen of Rabbi Shabbatai HaKohen printed in the margins. Major commentaries written on the Shulchan Aruch include the Ketzos Hachoshen, Avnei Milu'im, and the Nesivos Hamishpat. Many later Halakhic works were based on the Shulchan Aruch. These include Rabbi Shneur Zalman of Liadi's Shulchan Aruch HaRav, Rabbi Yechiel Michel Epstein's Aruch HaShulchan, Rabbi Shlomo Ganzfried's Kitzur Shulchan Aruch, and Rabbi Avraham Danzig's Chayei Adam and Chochmas Adam (on Orach Chayim and Yoreh De'ah only). The Mishnah Berurah, a six-volume work expounding on Orach Chayim, was published between 1884 and 1907 and is followed by most Litvishe Jews almost exclusively. Comparable Sephardic works are the Kaf HaChaim and Yalkut Yosef. The Ben Ish Hai, by Rabbi Yosef Hayyim, is based on the sermons he delivered, and therefore includes halakha as well as Kabbalah and explanations of the Torah. Many Halakhic works of the Acharonim are responsa. 
These include the Igros Moshe of Rabbi Moshe Feinstein, the Noda B'Yehudah of Rabbi Yechezkel Landau, the She'eilot U'teshuvot Rabbi Akiva Eiger of Rabbi Akiva Eiger, the Beis HaLevi of Rabbi Yosef Dov Soloveitchik, the Shevet HaLevi of Rabbi Shmuel Wosner, and the Tzitz Eliezer of Rabbi Eliezer Waldenberg. Another notable Halakhic work is the Chofetz Chaim, dealing with the laws of proper speech and written by Rabbi Yisrael Meir Kagan. Hasidism Also known as chasidus, Hasidism is an Orthodox Jewish movement originating in Eastern Europe in the mid-eighteenth century, founded by the Baal Shem Tov. Describing Hasidic thought, Rabbi Aryeh Kaplan writes: "In the teachings of Hasidic masters, one comes across a new way of approaching God and the spiritual. Neither Kabbalah nor philosophy, but experience is the proper way to approach God. 'Serve God with gladness!' 'Taste and see that God is good!' 'For me the closeness of God is best!'... The Hasidic masters used the language of Kabbalah, and to a lesser extent that of Jewish philosophy, to teach the average individual how he could experience God." The first Hasidic book to be published, Toldot Yaakov Yosef by Rabbi Yaakov Yosef of Pollonye, interlaced with quotations from the Baal Shem Tov, was published in 1780. Later Hasidic works include Noam Elimelech by Rabbi Elimelech of Lizensk, Bnei Yissaschar by Rabbi Tzvi Elimelech Spira, Kedushat Levi by Rabbi Levi Yitzchok of Berditchev, and Tanya by Rabbi Shneur Zalman of Liadi. Musar While the study of musar (spiritual and interpersonal self-improvement) has always existed in Jewish circles, it became more widespread with the start of the Musar movement in the nineteenth century. The classic musar library of Shaarei Teshuvah, Chovot HaLevavot, Maalot HaMiddot, Orchot Tzaddikim, Mesillat Yesharim, and Derech Hashem was later expanded with the writings of rabbis and mashgichim after the Musar movement began. Later works include Rabbi Yisrael Salanter's Or Yisrael, Rabbi Simcha Zissel Ziv's Chochmah U'Mussar, Rabbi Yosef Yozel Horowitz's Madreigas HaAdam, Rabbi Eliyahu Dessler's Michtav MeEliyahu, Rabbi Yeruchom Levovitz's Daas Chochmah U'Mussar and Daas Torah, Rabbi Chaim Shmuelevitz's Sichos Musar, and Rabbi Shlomo Wolbe's Alei Shur. Contemporary Hasidic perspectives on the place of sefarim in daily life include initiatives by Chabad emphasizing both scholarly engagement and the physical presence of holy books in the home. In the 1970s, the Lubavitcher Rebbe initiated the Bayis Malei Sefarim ("House Filled with Books") campaign, encouraging Jewish families to establish and maintain collections of Torah literature in their homes. As explained by a host on the Jewish educational platform 18Forty, this was "the mitzvah to have a house full of books... If you have a house full of books, your life is going to look differently... It's part of your life".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEFrank200999–100-42] | [TOKENS: 8460] |
Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. The linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC in Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punning phrase, "tertia deducta", can be translated as "with one-third off (in price)" or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". The British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both the lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions of it documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in the more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. 
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine", and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. 
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
The neuroscience study just described adds credence to the common experience of being exposed to an off-colour joke, where a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content of the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings; the punchline remains the same, but it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical to explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. 
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and is for the most part solitary. While the text of a joke is preserved, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups: they are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and the responses to them. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection "creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation." Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or the logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Several kinds of joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes, he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by the widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist: "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., the numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, even as it makes it theoretically possible to file the same text under multiple motifs; a sketch of the resulting filing problem follows below. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. The first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents always have at least one actor and usually an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: "…Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry." 
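Here is the promised toy sketch of the filing problem described above: one joke, decomposed into an actor, an item and an incident, lands under three separate headings at once, so there is no single place to put it or find it. The motif labels are invented placeholders, not entries from the Thompson Motif Index.

```python
# Toy illustration of multi-motif filing. The motif labels are
# invented placeholders, not actual Thompson Motif Index entries.
from collections import defaultdict

joke_id = "lightbulb-joke-001"
motifs = [
    "actor: numbskull",         # who acts
    "item: lightbulb",          # what is acted upon
    "incident: absurd method",  # what happens
]

index: dict[str, list[str]] = defaultdict(list)
for motif in motifs:
    index[motif].append(joke_id)  # the same text is filed three times

# A researcher must guess which of the three headings to search under,
# and an archivist must decide which one is "the" major element.
for heading, filed in index.items():
    print(f"{heading:>24} -> {filed}")
```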
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. The six KRs of the joke structure are the script opposition (SO), the logical mechanism (LM), the situation (SI), the target (TA), the narrative strategy (NS), and the language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour; a sketch of such a label follows below. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
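A minimal sketch of the concatenated GTVH label described above, with similarity judged by counting shared KR values. The KR field names follow the theory's standard abbreviations; the example values are invented for illustration.

```python
# Sketch of a GTVH six-KR label. Field names follow the theory's
# abbreviations; the example values are invented, not taken from
# Attardo and Raskin's published analyses.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class GTVHLabel:
    SO: str            # script opposition
    LM: Optional[str]  # logical mechanism (may be empty)
    SI: str            # situation
    TA: Optional[str]  # target (may be empty)
    NS: str            # narrative strategy
    LA: str            # language

def shared_krs(a: GTVHLabel, b: GTVHLabel) -> int:
    """Count matching KR values; jokes sharing more KRs count as
    more similar under the GTVH."""
    da, db = asdict(a), asdict(b)
    return sum(da[k] == db[k] for k in da)

joke_a = GTVHLabel(SO="dumb/smart", LM="faulty reasoning",
                   SI="changing a lightbulb", TA="group X",
                   NS="riddle", LA="neutral wording")
joke_b = GTVHLabel(SO="dumb/smart", LM="faulty reasoning",
                   SI="changing a lightbulb", TA="group Y",
                   NS="riddle", LA="neutral wording")

print(shared_krs(joke_a, joke_b))  # 5: identical except for the target
```

Note that both labels respect the hierarchy constraint mentioned above: the lightbulb situation (SI) forces the riddle narrative strategy (NS).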
A recent review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that neither smiles nor laughter are always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes (and using the same jokes across languages would not be feasible), so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last few decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as the Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs can now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: the discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre, they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index, first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to the more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning, because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build (see the sketch below). More sophisticated computer joke programs have yet to be developed. Given the SSTH/GTVH humour theories described above, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in the speaker's world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build such wide-ranging semantic scripts and to understand language in a broader context, a context that any child picks up in daily interaction with its environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. Only "the most complex linguistic structures [...] can serve any formal and/or computational treatment of humor well"; toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts currently underway.
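Template-driven punning of the kind described above is easy to sketch. The toy generator below is purely illustrative (the template and the word list are invented for this example and are not taken from any published joke program); it shows why such systems display no intelligence: they merely slot pre-defined options into a fixed frame, with no semantic scripts behind the words.

import random

# The entire "knowledge" of the program: one fixed question-answer
# frame and a finite set of pre-defined punning substitutions.
TEMPLATE = "What do you call a {noun} that {trait}? A {pun}!"
OPTIONS = [
    {"noun": "bear", "trait": "has no teeth", "pun": "gummy bear"},
    {"noun": "cow", "trait": "plays guitar", "pun": "moo-sician"},
    {"noun": "fish", "trait": "wears a bowtie", "pun": "so-fish-ticated"},
]

def tell_joke() -> str:
    # No scripts, no world knowledge, no reasoning: just a random
    # lookup followed by a string substitution.
    return TEMPLATE.format(**random.choice(OPTIONS))

if __name__ == "__main__":
    print(tell_joke())

Recognising a joke, by contrast, would require exactly the encyclopaedic semantic scripts that such programs lack.
See also Notes References Further reading |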
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ISO_4217] | [TOKENS: 1574] |
Contents ISO 4217 ISO 4217 is a standard published by the International Organization for Standardization (ISO) that defines alpha codes and numeric codes for the representation of currencies and provides information about the relationships between individual currencies and their minor units. This data is published in three tables. The first edition of ISO 4217 was published in 1978. The tables, history and ongoing discussion are maintained by SIX Group on behalf of ISO and the Swiss Association for Standardization. The ISO 4217 code list is used in banking and business globally. In many countries, the ISO 4217 alpha codes for the more common currencies are so well known publicly that exchange rates published in newspapers or posted in banks use only these to delineate the currencies, instead of translated currency names or ambiguous currency symbols. ISO 4217 alpha codes are used on airline tickets and international train tickets to remove any ambiguity about the price. History In 1973, the ISO Technical Committee 68 decided to develop codes for the representation of currencies and funds for use in any application of trade, commerce or banking. At the 17th session (February 1978), the related UN/ECE Group of Experts agreed that the three-letter alphabetic codes of International Standard ISO 4217, "Codes for the representation of currencies and funds", would be suitable for use in international trade.[citation needed] Over time, new currencies are created and old currencies are discontinued. Such changes usually originate from the formation of new countries, treaties between countries on shared currencies or monetary unions, or redenomination from an existing currency due to excessive inflation. As a result, the list of codes must be updated from time to time. The ISO 4217 maintenance agency is responsible for maintaining the list of codes. Types of codes In the case of national currencies, the first two letters of the alpha code are the two letters of the ISO 3166-1 alpha-2 country code and the third is usually the initial of the currency's main unit. So Japan's currency code is JPY: "JP" for Japan and "Y" for yen. This eliminates the problem caused by the names dollar, franc, peso, and pound being used in many countries, each having significantly differing values. In some cases, the third letter of the alpha code is not the initial letter of a currency unit name; there may be a number of reasons for this. In addition to codes for most active national currencies, ISO 4217 provides codes for "supranational" currencies, for procedural purposes, and for several things which are "similar to" currencies. The use of the initial letter "X" for these purposes is facilitated by the ISO 3166 rule that no official country code beginning with X will ever be assigned. The inclusion of the EU (denoting the European Union) in the ISO 3166-1 reserved codes list allows the euro to be coded as EUR rather than assigned a code beginning with X, even though it is a supranational currency. ISO 4217 also assigns a three-digit numeric code to each currency. This numeric code is usually the same as the numeric code assigned to the corresponding country by ISO 3166-1. For example, USD (United States dollar) has numeric code 840, which is also the ISO 3166-1 numeric code for "US" (United States). List of ISO 4217 currency codes The following is a list of active codes of official ISO 4217 currency names as of 1 January 2026[update]. In the standard the values are called "alphabetic code", "numeric code", "minor unit", and "entity".
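The code-construction rules above translate directly into a small lookup structure. A minimal sketch in Python, using a tiny hand-picked sample of well-known entries (the full tables are maintained by SIX Group; the helper names here are illustrative, not part of any standard library):

# Illustrative sample of ISO 4217 entries: alpha code -> (numeric code,
# minor unit, entity). Numeric codes are conventionally zero-padded to
# three digits, e.g. 048 for BHD. USD's 840 matches the ISO 3166-1
# numeric code for the United States, as described above.
ISO_4217_SAMPLE = {
    "USD": (840, 2, "United States dollar"),
    "JPY": (392, 0, "Japanese yen"),
    "BHD": (48, 3, "Bahraini dinar"),
    "EUR": (978, 2, "euro"),
}

def split_alpha(code: str) -> tuple[str, str]:
    """Split a national alpha code into its ISO 3166-1 alpha-2 country
    part and the third letter, usually the currency's initial."""
    return code[:2], code[2]

def is_x_code(code: str) -> bool:
    # ISO 3166 guarantees that no country code will ever begin with "X",
    # so X-prefixed codes cannot collide with country-derived ones.
    # EUR is the notable exception, built on the reserved entry "EU".
    return code.startswith("X")

assert split_alpha("JPY") == ("JP", "Y")  # "JP" for Japan, "Y" for yen
assert not is_x_code("EUR")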
The alphabetic and numeric codes are also endorsed in UN/CEFACT recommendation 9, paragraphs 8–9 (ECE/TRADE/203, 1996). A number of currencies had official ISO 4217 currency codes and currency names until their replacement by another currency. The table below shows the ISO currency codes of former currencies and their common names (which do not always match the ISO 4217 names). That table was introduced by ISO at the end of 1988. Currency details The 2008 (7th) edition of ISO 4217 says the following about minor units of currency: "Requirements sometimes arise for values to be expressed in terms of minor units of currency. When this occurs, it is necessary to know the decimal relationship that exists between the currency concerned and its minor unit. This information has therefore been included in this International Standard and is shown in the column headed "Minor unit" in Tables A.1 and A.2; "0" means that there is no minor unit for that currency, whereas "1", "2" and "3" signify a ratio of 10:1, 100:1 and 1000:1 respectively. The names of the minor units are not given." Examples for the ratios of 100:1 and 1000:1 include the United States dollar and the Bahraini dinar, for which the column headed "Minor unit" shows "2" and "3", respectively (see the sketch at the end of this section). As of 2021[update], two currencies have non-decimal ratios, the Mauritanian ouguiya and the Malagasy ariary; in both cases the ratio is 5:1. For these, the "Minor unit" column shows the number "2". Some currencies, such as the Burundian franc, do not in practice have any minor currency unit at all. These show the number "0", as with currencies whose minor units are unused due to negligible value.[citation needed] The ISO 4217 standard does not regulate spacing, prefixing, or suffixing in the usage of currency codes. The style guide of the European Union's Publication Office declares that, for texts issued by or through the Commission in English, Irish, Latvian, and Maltese, the ISO 4217 code is to be followed by a "hard space" (non-breaking space) and the amount (for example, "EUR 30"); for texts in Bulgarian, Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Lithuanian, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish, and Swedish, the order is reversed: the amount is followed by a non-breaking space and the ISO 4217 code (for example, "30 EUR"). As illustrated, the order is determined not by the currency but by the native language of the document context. The US dollar has two codes assigned: USD and USN ("US dollar next day"[definition needed]). The USS (same day) code is no longer in use, and was removed from the list of active ISO 4217 codes in March 2014. Non-ISO 4217 currencies A number of active currencies do not have an ISO 4217 code for various reasons. See Category:Fixed exchange rate for a list of all currently pegged currencies. Despite having no presence or status in the standard, three-letter acronyms that resemble ISO 4217 coding are sometimes used locally or commercially to represent de facto currencies or currency instruments. The following non-ISO codes were used in the past. Minor units of currency (also known as currency subdivisions or currency subunits) are often used for pricing and trading company shares and other assets, such as energy, but are not assigned codes by ISO 4217. Two conventions for representing minor units are in widespread use; a third convention is similar to the second but uses an upper-case letter, e.g. ZAC for the South African cent. Cryptocurrencies have not been assigned an ISO 4217 code.
However, some cryptocurrencies and cryptocurrency exchanges use three-letter acronyms that resemble ISO 4217 codes.
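The minor-unit exponents and the EU style guide's ordering rule described above lend themselves to a short worked example. A minimal sketch under the same illustrative sample as before (the function names are invented for this example; note that the two non-decimal currencies, whose true ratio is 5:1, would break the power-of-ten assumption):

from decimal import Decimal

NBSP = "\u00a0"  # the "hard space" (non-breaking space) the EU style guide requires

# Minor-unit values as found in the "Minor unit" column of Tables A.1/A.2.
MINOR_UNIT = {"USD": 2, "BHD": 3, "JPY": 0}

def to_minor_units(code: str, amount: Decimal) -> int:
    """Express an amount in minor units: "2" encodes 100:1, "3" 1000:1."""
    return int(amount.scaleb(MINOR_UNIT[code]))

def format_eu(code: str, amount: Decimal, code_first: bool) -> str:
    # code_first=True gives the order used in English, Irish, Latvian and
    # Maltese texts ("EUR 30"); code_first=False gives the reversed order
    # used in the other official languages ("30 EUR").
    return f"{code}{NBSP}{amount}" if code_first else f"{amount}{NBSP}{code}"

assert to_minor_units("USD", Decimal("12.34")) == 1234  # US cents, 100:1
assert to_minor_units("BHD", Decimal("1.234")) == 1234  # Bahraini fils, 1000:1
print(format_eu("EUR", Decimal("30"), code_first=True))   # EUR 30
print(format_eu("EUR", Decimal("30"), code_first=False))  # 30 EUR

See also Notes References External links |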
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-wired_inside-18] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in Delaware in 2015 but has since evolved a complex corporate structure. As of October 2025, following a restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion in OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the AI safety researchers then employed at OpenAI left the company, citing its prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged behind the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected the decades-long project to eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers; he was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous"; Musk revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, and would use that equity to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would strip governance safeguards from the nonprofit and remove oversight by the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors are appointed by the OpenAI Foundation, which can remove them at any time; members of the Foundation's board also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which would be needed to pay for Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, with the partners planning to fund the project over the following four years. In June 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the following month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models; it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability: OpenAI targets cash-flow-positive operations by 2029 and projects revenue of approximately $200 billion by 2030. This spending trajectory underscores the enormous capital requirements of scaling cutting-edge AI technology. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion, surpassing SpaceX as the world's most valuable privately owned company. On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him didn't work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft unless the board rehired Altman and then itself resigned. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that a Microsoft employee, initially unnamed, had joined the board as a non-voting member to observe the company's operations; Microsoft gave up the observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave under which OpenAI acquired $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced the development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. In the same month, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications; the Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. The pieces of text to be annotated, however, usually contained detailed descriptions of various types of violence, including sexual violence. An investigation by Time uncovered that OpenAI had begun sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama redistributed the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative is intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In the same month, OpenAI and Nvidia announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from Nvidia in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450, and will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with the defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that had been internally codenamed "Strawberry". Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was only available to Pro users in the United States. OpenAI released its deep research agent nine days later. It scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model it said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to help scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this departure from its earlier openness. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team denied receiving anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines such as Google through an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned his seat on the Board of Directors, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki; Jan Leike, co-leader of the superalignment team, also departed, citing concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems, on the other hand, should not be overly regulated. They also called for more technical safety research on superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. The investigation concerned allegations that the company had scraped public data and published false and defamatory information; the agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI and from acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story and Alternate Media Inc. filed lawsuits against OpenAI on copyright grounds. The lawsuits are said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications were The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications, like The New York Times, chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint against OpenAI with the Austrian Datenschutzbehörde for violating the EU's General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. Additionally, OpenAI claimed that it could make available neither the recipients of ChatGPT's output nor the sources used. OpenAI was also criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that months of conversations with ChatGPT about mental health and methods of self-harm had contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over her murder, allegedly committed by her son, Stein-Erik Soelberg, then 56 years old. In the months prior to the killing, Soelberg, who was paranoid and delusional, had often discussed his ideas with ChatGPT. The estate claims that the company shared responsibility due to the risk of "chatbot psychosis", although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded by saying it would make ChatGPT safer for users disconnected from reality. See also References Further reading External links |
======================================== |